
MPI User Survey: Raw Data

March 13, 2010 at 12:00 pm PST

Earlier this week, Josh Hursey and I presented an in-depth analysis of the results of the MPI user community survey at the MPI Forum meeting in San Jose, CA (hosted by Cisco — yay!).  Remember that the survey is intended to help guide the MPI Forum's MPI-3 standardization process.  We had a fabulous response rate: 1,401 respondents started the survey, and 838 of them completed it (almost 60%).
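
As a quick sanity check on that completion figure (my arithmetic, not from the post):

    \frac{838}{1401} \approx 0.598

which is indeed just shy of 60%.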

Some of the results were actually quite fascinating (I’ll talk about them in future blog entries), but Josh and I need to give the following disclaimers:

  1. We are not statisticians.  We tried to be accurate, rigorous, and unbiased in our analysis, but we may not have done it right.
  2. We only presented the answers to a specific set of questions posed to us by the Forum at the January meeting.

As such, we have decided to release the raw data of the survey to the general HPC community.  It is our hope that others will also analyze this data and share their findings with the community. 


Why MPI is Good for You

March 6, 2010 at 12:00 pm PST

If ever I doubted that MPI was good for the world, I think that all I would need to do is remind myself of this commit that I made into the Open MPI source code repository today.  It was a single-character change — changing a 0 to a 1.  But the commit log message was Tolstoyian in length:

  • 87 lines of text
  • 736 words
  • 4225 characters

Go ahead — read the commit message.  I double-dog dare you.

That tome of a commit message both represents several months of on-and-off work on a single bug and details the hard-won knowledge that was required to understand why changing a 0 to a 1 fixed it.

Ouch.



OpenFabrics Sonoma Workshop 2010

February 24, 2010 at 12:00 pm PST

This has been posted elsewhere, but it’s worth mentioning here both because iWARP and InfiniBand are popular HPC interconnects and to reach as wide an audience as possible: the OpenFabrics Alliance is hosting its annual workshop in Sonoma on March 14-17, 2010.

The theme of this year’s Sonoma Workshop is “Exascale to Enterprise.”  A preliminary agenda is available for your browsing pleasure. 

Yes, I’ll be there.  Will you?


…but what about mpif.h?

February 15, 2010 at 12:00 pm PST

What to do about the implicit Fortran MPI interfaces (i.e., mpif.h) in MPI-3?  This is something that I’ve been thinking about a lot recently.

Sidenote: Some people refer to mpif.h as “the Fortran 77 MPI interfaces.”  That isn’t quite correct; there’s actually stuff in mpif.h that didn’t exist until well beyond the Fortran 77 specification, such as KIND attributes and whatnot.  So if someone calls mpif.h “the Fortran 77 MPI interfaces”, you have my permission to give them a slugbug punch.  Ditto if they call an “MPI process” a “rank.”

As I’ve mentioned in prior entries, we’re going to have much-updated explicit Fortran interfaces in MPI-3 (the so-called “Fortran ’03 interfaces”, but just like “the Fortran 77 interfaces”, that name isn’t quite accurate, either).  As I swear I heard Snoop Dogg say once, “These new Fortran explicit MPI interfaces are da fa-schizzle”.  They offer a bunch of language features that MPI ignored before, and also fix some long-standing problems — most importantly with regard to asynchronous buffer control.

So the question isn’t so much about what to do for the future; it’s more a question about what to do with the past.  Should we deprecate the old, ancient, decrepit, harmful, stanky, nasty Fortran implicit interfaces?  The answer is not quite as obvious as I would hope.
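
To make the implicit-vs-explicit distinction concrete, here is a minimal Fortran sketch of my own (not from the post), using the MPI-2-era “use mpi” module to stand in for explicit interfaces.  Swap the “use mpi” line for “include 'mpif.h'” (placed after “implicit none”) and you get the old implicit behavior, where the compiler cannot check MPI argument lists at all.

    ! Hypothetical sketch: with explicit interfaces, the compiler type-checks
    ! the MPI calls below; with mpif.h, passing the wrong type or count to
    ! MPI_Comm_rank would compile silently and misbehave at run time.
    program hello_mpi
      use mpi                  ! explicit Fortran interfaces (type-checked)
      implicit none
      integer :: rank, ierr

      call MPI_Init(ierr)
      call MPI_Comm_rank(MPI_COMM_WORLD, rank, ierr)
      print '(A,I0)', 'Hello from MPI process ', rank
      call MPI_Finalize(ierr)
    end program hello_mpi

The MPI-3 explicit interfaces go further still, for example by giving the compiler enough information to handle buffers passed to nonblocking calls correctly; that is exactly the asynchronous buffer control issue mentioned above.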


RCE Cast: SC10 Student cluster competition

January 30, 2010 at 12:00 pm PST

We recorded an RCE podcast earlier today talking to Tiki L. Suarez-Brown, Ph.D., and Hai Ah Nam, Ph.D., the two co-chairs of the SC10 student cluster competition, and Doug Smith, the faculty sponsor of the Colorado cluster competition team for the past few years. Getting everyone’s schedules to line up for the recording was a bit dicey; the recording will likely be available a little later than usual (Brock usually releases recordings on Saturdays — this one will likely be out early next week).

The competition is no cakewalk: teams of students get a very specific power budget (26 amps) to run a whole slew of real-world HPC applications within a limited time frame.  The teams are graded on several metrics, including the highest Linpack number, the most computational work processed in the time allotted, a question-and-answer interview, etc.

Fun fact: 26 amps is about how much you need to run 3 coffee makers.
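
(For the curious, the arithmetic behind that comparison, assuming a standard 120 V circuit and roughly 1000 W per drip coffee maker; the figures are my own, not from the post:

    26\,\mathrm{A} \times 120\,\mathrm{V} = 3120\,\mathrm{W} \approx 3 \times 1000\,\mathrm{W}

so three coffee makers is about right.)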
