
MPI User Survey: Fun Results

March 19, 2010 at 12:00 pm PST

Here are some fun results that we gleaned from the MPI user community survey…

Respondents were asked how much they valued each of the following in MPI on a scale from 1=most important to 5=least important (each item could be rated individually):

  • Runtime performance (e.g., latency, bandwidth, resource consumption, etc.)
  • Feature-rich API
  • Run-time reliability
  • Scalability to large numbers of MPI processes
  • Integration with other middleware, communication protocols, etc.

The first item in the list — runtime performance — may seem silly.  After all, this is high performance computing.  Many on the Forum assumed that everyone would rank runtime performance as the most important thing.  They were wrong (!).


MPI User Survey: Raw Data

March 13, 2010 at 12:00 pm PST

Earlier this week, Josh Hursey and I presented some in-depth results analysis of the MPI user community survey at the MPI Forum meeting in San Jose, CA (hosted by Cisco — yay!).  Remember that the survey is intended to help the MPI Forum guide the MPI-3 standardization process.  We had a fabulous response rate: 1,401 respondents started the survey and 838 completed it (almost 60%).

Some of the results were actually quite fascinating (I’ll talk about them in future blog entries), but Josh and I need to give the following disclaimers:

  1. We are not statisticians.  We tried to be accurate, rigorous, and unbiased in our analysis, but we may not have done it right.
  2. We only presented the answers to a specific set of questions posed to us by the Forum at the January meeting.

As such, we have decided to release the raw data of the survey to the general HPC community.  It is our hope that others will also analyze this data and share their findings with the community. 


Why MPI is Good for You

March 6, 2010 at 12:00 pm PST

If ever I doubted that MPI was good for the world, I think that all I would need to do is remind myself of this commit that I made to the Open MPI source code repository today.  It was a single-character change — changing a 0 to a 1.  But the commit log message was Tolstoyan in length:

  • 87 lines of text
  • 736 words
  • 4225 characters

Go ahead — read the commit message.  I double-dog dare you.

That tome of a commit message both represents several months of on-and-off work on a single bug and details the hard-won knowledge that was required to understand why changing a 0 to a 1 fixed it.

Ouch.



OpenFabrics Sonoma Workshop 2010

February 24, 2010 at 12:00 pm PST

This has been posted elsewhere, but it’s worth mentioning here, both because iWARP and InfiniBand are popular HPC interconnects and because it deserves as wide an audience as possible: the OpenFabrics Alliance is hosting its annual workshop in Sonoma on March 14-17, 2010.

The theme of this year’s Sonoma Workshop is “Exascale to Enterprise.”  A preliminary agenda is available for your browsing pleasure. 

Yes, I’ll be there.  Will you?


…but what about mpif.h?

February 15, 2010 at 12:00 pm PST

What to do about the implicit Fortran MPI interfaces (i.e., mpif.h) in MPI-3?  This is something that I’ve been thinking about a lot recently.

Sidenote: Some people refer to mpif.h as “the Fortran 77 MPI interfaces.”  That isn’t quite correct; there’s actually stuff in mpif.h that didn’t exist until well beyond the Fortran 77 specification, such as KIND attributes and whatnot.  So if someone calls mpif.h “the Fortran 77 MPI interfaces”, you have my permission to give them a slugbug punch.  Ditto if they call an “MPI process” a “rank.”
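
To make the sidenote concrete, here is a tiny sketch of my own (the program is invented for illustration; any MPI-2 routine that takes address-sized integers would do) showing mpif.h code that plainly is not Fortran 77: the KIND constants that mpif.h defines, such as MPI_ADDRESS_KIND, only make sense with Fortran 90-style declarations.

    ! Illustrative only: MPI_Type_get_extent is just one routine that
    ! requires address-sized integers.
    program kind_demo
      implicit none
      include 'mpif.h'
      integer :: ierr
      integer(kind=MPI_ADDRESS_KIND) :: lb, extent

      call MPI_Init(ierr)
      ! lb and extent must be declared with the MPI_ADDRESS_KIND constant
      ! defined in mpif.h; there is no way to spell this declaration in
      ! strict Fortran 77.
      call MPI_Type_get_extent(MPI_INTEGER, lb, extent, ierr)
      print *, 'extent of MPI_INTEGER =', extent
      call MPI_Finalize(ierr)
    end program kind_demo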

As I’ve mentioned in prior entries, we’re going to have much-updated explicit Fortran interfaces in MPI-3 (the so-called “Fortran ’03 interfaces”, but just like “the Fortran 77 interfaces”, that name isn’t quite accurate, either).  As I swear I heard Snoop Dogg say once, “These new Fortran explicit MPI interfaces are da fa-schizzle”.  They offer a bunch of language features that MPI ignored before, and also fix some long-standing problems — most importantly with regard to asynchronous buffer control.
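
To make the stakes concrete, here is a rough sketch of my own (the trivial send-to-self pattern and all the names are invented purely for illustration, and the comments describe the general hazards rather than anything specific to this toy program) of the two things explicit interfaces buy you over mpif.h: compile-time argument checking, and a way to talk to the compiler about buffers that MPI is still using asynchronously.

    program interface_demo
      implicit none
      include 'mpif.h'          ! the implicit interfaces discussed above
      integer :: rank, ierr, req
      integer :: sendbuf(4), recvbuf(4)

      call MPI_Init(ierr)
      call MPI_Comm_rank(MPI_COMM_WORLD, rank, ierr)

      recvbuf = -1
      sendbuf = rank

      ! With mpif.h the compiler has no interface to check these calls
      ! against, so a wrong argument count or type compiles cleanly and
      ! only fails at run time; an explicit interface turns that into a
      ! compile-time error.
      call MPI_Irecv(recvbuf, 4, MPI_INTEGER, rank, 0, MPI_COMM_WORLD, &
                     req, ierr)
      call MPI_Send(sendbuf, 4, MPI_INTEGER, rank, 0, MPI_COMM_WORLD, ierr)

      ! This is where the long-standing asynchronous buffer problem lives:
      ! recvbuf is filled in behind the compiler's back during MPI_Wait,
      ! and the implicit interfaces give MPI no way to tell the compiler
      ! about it.
      call MPI_Wait(req, MPI_STATUS_IGNORE, ierr)

      print *, 'rank', rank, 'got', recvbuf
      call MPI_Finalize(ierr)
    end program interface_demo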

So the question isn’t so much about what to do for the future; it’s more a question about what to do with the past.  Should we deprecate the old, ancient, decrepit, harmful, stanky, nasty Fortran implicit interfaces?  The answer is not quite as obvious as I would hope.
