
Can we count on MPI to handle large datasets?

April 22, 2011 at 2:25 pm PST

(today’s entry is guest-written by Fab Tillier, Microsoft MPI engineer extraordinaire)

When you send data in MPI, you specify how many items of a particular datatype you want to send in your call to an MPI send routine.  Likewise, when you read data from a file, you specify how many datatype elements to read.

This “how many” value is referred to in MPI as a count parameter, and all of MPI’s functions define count parameters as integers: int in C, INTEGER in Fortran.  This definition often limits users to 2^31 elements (i.e., roughly two billion elements) because int and INTEGER default to 32 bits on many of today’s platforms.

That may sound pretty big, but consider that a 2^31 byte file is not really that large by today’s standards — especially in HPC, where datasets can sometimes be terabytes in size.  Reading a ~2 gigabyte file can take (far) less than a second.
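
One common workaround until large-count support actually lands: describe a big buffer as a smaller count of a larger derived datatype.  Here’s a minimal C sketch of the idea (CHUNK and send_big() are purely illustrative names, and a real application would need matching receives on the other side, of course):

```c
#include <mpi.h>

#define CHUNK (1024 * 1024)   /* doubles per derived-datatype element; arbitrary */

/* send_big() is a hypothetical helper, not an MPI routine: it sends nelems
   doubles even when nelems does not fit in an int, by packaging most of the
   buffer as a contiguous derived datatype. */
void send_big(double *buf, long long nelems, int dest, MPI_Comm comm)
{
    MPI_Datatype chunk_type;
    MPI_Type_contiguous(CHUNK, MPI_DOUBLE, &chunk_type);
    MPI_Type_commit(&chunk_type);

    long long nchunks = nelems / CHUNK;   /* must itself still fit in an int */
    long long rem     = nelems % CHUNK;

    /* The bulk of the buffer goes out as whole chunks... */
    MPI_Send(buf, (int) nchunks, chunk_type, dest, 0, comm);

    /* ...and the leftover elements go out as a plain send. */
    MPI_Send(buf + nchunks * CHUNK, (int) rem, MPI_DOUBLE, dest, 1, comm);

    MPI_Type_free(&chunk_type);
}
```

The catch is that this pushes the problem around rather than solving it: the chunk count itself still has to fit in an int, and status-query functions such as MPI_Get_count() still return an int.  That’s exactly why the Forum is looking at proper large-count support for MPI-3.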


What is MPI_MPROBE?

April 15, 2011 at 5:17 pm PST

Here are some MPI quick-bites for this week:

  • The MPI_MPROBE proposal was voted into MPI-3 a few weeks ago.  Yay! (see this quick slideshow for an explanation of what MPI_MPROBE is, and the rough sketch at the bottom of this post for what the new calls look like)
  • The Hardware Locality project just released hwloc v1.2.  This new version now includes distance metrics between objects in the topology tree.  W00t!
  • Support for large counts looks good for getting passed into MPI-3; it’s up for its first formal reading at the upcoming Forum meeting.
  • The same is true for the new MPI-3 one-sided stuff; it, too, is up for its first formal reading at the upcoming Forum meeting (they haven’t sent around their new PDF yet, but they will within a week or so — stay tuned here for updates).
  • Likewise, the new Fortran-08 bindings are up for their first Forum presentation next meeting.  We solved all of the outstanding Fortran issues with the F77 and F90 bindings… with the possible exception of non-blocking communication code movement.  :-(  That one is still being debated with the Fortran language standardization body — it’s a complicated issue!
  • Finally — the new MPI tools interface chapter is up for a first formal reading, too.

WHEW!

That’s a lot of first formal readings in one meeting…
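
Speaking of MPI_MPROBE: for the curious, here’s a rough C sketch of what matched probe looks like, assuming the C bindings from the proposal (MPI_Mprobe, MPI_Mrecv, and the new MPI_Message handle); recv_unknown_size() is just an illustrative helper:

```c
#include <mpi.h>
#include <stdlib.h>

/* Matched probe, per the MPI-3 proposal: MPI_Mprobe hands back a message
   handle, so the subsequent MPI_Mrecv is guaranteed to receive exactly the
   message that was probed -- even if other threads are receiving on the same
   communicator, which is the race that plain MPI_Probe cannot avoid. */
void recv_unknown_size(int src, int tag, MPI_Comm comm)
{
    MPI_Message msg;
    MPI_Status  status;
    int         count;

    MPI_Mprobe(src, tag, comm, &msg, &status);   /* probe and claim the message */
    MPI_Get_count(&status, MPI_INT, &count);     /* how big is it? */

    int *buf = malloc(count * sizeof(int));
    MPI_Mrecv(buf, count, MPI_INT, &msg, &status);   /* receive that exact message */

    /* ... use buf ... */
    free(buf);
}
```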


Trust, but verify: good science

March 25, 2011 at 10:27 am PST

A recent exchange on the Open MPI users’ list turned up a minor bug in our code base.  The bug had to do with how Open MPI reported the value of a setting through our configuration querying tool (“ompi_info”).

The code using the configuration value in question was doing the Right Things, but the tool was effectively reporting the wrong value.  This led to some confusion on the mailing list, resulting in a bug fix being pushed upstream and the user concluding, “Trust, but verify.”

Very true!


More on Memory Affinity

March 18, 2011 at 7:35 am PST

There was a great comment chain on my prior post (“Unexpected Linux Memory Migration”), which brought out a number of good points.  Let me clarify a few things from that post:

  • My comments were definitely about HPC types of applications, which are admittedly a small subset of applications that run on Linux.  It is probably a fair statement to say that the OS’s treatment of memory affinity will be just fine for most (non-HPC) applications.
  • Note, however, that Microsoft Windows and Solaris do retain memory affinity information when pages are swapped out.  When the pages are swapped back in, if they were bound to a specific locality before swapping, they are restored to that same locality.  This is why I was a bit surprised by Linux’s behavior.
  • More specifically, Microsoft Windows and Solaris seem to treat memory locality as a binding decision — Linux treats it as a hint.
  • Many (most?) HPC applications are designed not to cause paging.  However, at least some do.  A side point of this blog is that HPC is becoming commoditized — not everyone is out at the bleeding edge (meaning: some people willingly violate the “do not page” HPC mantra and are willing to give up a little performance in exchange for the other benefits that swapping provides).

To be clear, Open MPI has a few cases where it has very specific memory affinity needs that almost certainly fall outside the realm of just about every OS’s default memory placement scheme.  My point is that other applications may also have similar requirements, particularly as core counts go up and communication between threads / processes on different cores becomes more common.
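
To make the “binding vs. hint” distinction a bit more concrete, here’s a small generic sketch (plain libnuma, nothing Open MPI-specific; the node number and buffer size are arbitrary) of an application explicitly asking Linux to place a buffer on a particular NUMA node:

```c
#include <stdio.h>
#include <numa.h>    /* libnuma; link with -lnuma */

int main(void)
{
    if (numa_available() < 0) {
        fprintf(stderr, "No NUMA support on this system\n");
        return 1;
    }

    size_t len = 64 * 1024 * 1024;           /* 64 MB; arbitrary */
    void  *buf = numa_alloc_onnode(len, 0);  /* ask for pages on NUMA node 0 */
    if (NULL == buf) {
        fprintf(stderr, "numa_alloc_onnode failed\n");
        return 1;
    }

    /* ... touch and use the memory from cores close to node 0 ... */

    numa_free(buf, len);
    return 0;
}
```

Whether those pages are still on node 0 after being swapped out and back in is, of course, exactly the question that the original post raised.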


Unexpected Linux memory migration

March 4, 2011 at 7:20 am PST

I learned something disturbing earlier this week: if you allocate memory in Linux bound to a particular NUMA location and then that memory is paged out, it will lose that memory binding when it is paged back in.

Yowza!

Core counts are going up, and server memory networks are getting more complex; we’re effectively increasing the NUMA-ness of memory.  The specific placement of your data in memory is becoming (much) more important; it’s all about location, Location, LOCATION!

But unless you are very, very careful, your data may not be in the location that you think it is — even if you thought you had bound it to a specific NUMA node.
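
If you want to check, one option is to ask the kernel where a page actually lives right now.  Here’s a small sketch using libnuma’s numa_move_pages() in query-only mode (passing NULL for the target nodes just reports each page’s current node without moving anything); page_node() is an illustrative helper:

```c
#include <stdio.h>
#include <numa.h>    /* libnuma; link with -lnuma */

/* Return the NUMA node that currently holds the page containing addr,
   or a negative value if the page is not resident or the query fails. */
int page_node(void *addr)
{
    void *pages[1]  = { addr };
    int   status[1] = { -1 };

    /* nodes == NULL means "query only": status[i] is filled in with the
       current node of pages[i] (or a negative errno, e.g. -ENOENT if the
       page has been paged out). */
    if (numa_move_pages(0 /* this process */, 1, pages, NULL, status, 0) != 0) {
        fprintf(stderr, "numa_move_pages failed\n");
        return -1;
    }
    return status[0];
}
```

Call it on a buffer you thought was bound to a particular node and compare against what you expected: trust, but verify.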
