
…but what about mpif.h?

February 15, 2010 at 12:00 pm PST

What to do about the implicit Fortran MPI interfaces (i.e., mpif.h) in MPI-3?  This is something that I’ve been thinking about a lot recently.

Sidenote: Some people refer to mpif.h as “the Fortran 77 MPI interfaces.”  That isn’t quite correct; there’s actually stuff in mpif.h that didn’t exist until well beyond the Fortran 77 specification, such as KIND attributes and whatnot.  So if someone calls mpif.h “the Fortran 77 MPI interfaces”, you have my permission to give them a slugbug punch.  Ditto if they call an “MPI process” a “rank.”

As I’ve mentioned in prior entries, we’re going to have much-updated explicit Fortran interfaces in MPI-3 (the so-called “Fortran ’03 interfaces”, but just like “the Fortran 77 interfaces”, that name isn’t quite accurate, either).  As I swear I heard Snoop Dogg say once, “These new Fortran explicit MPI interfaces are da fa-schizzle”.  They offer a bunch of language features that MPI ignored before, and also fix some long-standing problems — most importantly with regard to asynchronous buffer control.

So the question isn’t so much about what to do for the future; it’s more a question about what to do with the past.  Should we deprecate the old, ancient, decrepit, harmful, stanky, nasty Fortran implicit interfaces?  The answer is not quite as obvious as I would hope.
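
For anyone who hasn’t bumped into the practical difference, here’s a minimal sketch of my own (not from the MPI standard, and using the existing “use mpi” module to stand in for explicit interfaces; the new MPI-3 interfaces go further than this): with mpif.h, the compiler has no interfaces to check your MPI calls against, so an argument of the wrong type, or the wrong number of arguments, compiles silently.  With an explicit-interface module, the same mistake is a compile-time error.

  ! Sketch only: the same trivial program written against the implicit
  ! interfaces (mpif.h) and against an explicit-interface module.

  program hello_implicit
    implicit none
    include 'mpif.h'            ! implicit interfaces: no compile-time checks
    integer :: rank, ierr
    call MPI_Init(ierr)
    call MPI_Comm_rank(MPI_COMM_WORLD, rank, ierr)
    ! Passing the wrong type (or the wrong number) of arguments to an MPI
    ! call here compiles cleanly and fails, if at all, at run time.
    call MPI_Finalize(ierr)
  end program hello_implicit

  program hello_explicit
    use mpi                     ! explicit interfaces for the same calls
    implicit none
    integer :: rank, ierr
    call MPI_Init(ierr)
    call MPI_Comm_rank(MPI_COMM_WORLD, rank, ierr)
    ! Here, an argument mismatch in an MPI call is rejected by the compiler.
    call MPI_Finalize(ierr)
  end program hello_explicit

(Those are two versions of the same program, of course, not one source file; compile them separately.)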


RCE Cast: SC10 Student cluster competition

January 30, 2010 at 12:00 pm PST

We recorded an RCE podcast earlier today talking to Tiki L. Suarez-Brown, Ph.D., and Hai Ah Nam, Ph.D., the two co-chairs of the SC10 student cluster competition, and Doug Smith, the faculty sponsor of the Colorado cluster competition team from the past few years.  Getting everyone’s schedules to line up for the recording was a bit dicey; the recording will likely be available a little later than usual (Brock usually releases recordings on Saturdays — this one will likely be out early next week).

The competition is no cakewalk: teams of students get a very specific power budget (26 amps) to run a whole slew of real-world HPC applications within a limited time frame.  The teams are graded on several metrics, including the highest Linpack number, the most computational work processed in the time allotted, a question-and-answer interview, etc.

Fun fact: 26 amps is about how much you need to run 3 coffee makers.
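
(If you want the back-of-the-envelope version, with some assumed numbers: on a roughly 115 V circuit, 26 amps is about 26 × 115 ≈ 3,000 watts, and a typical drip coffee maker draws somewhere around 1,000 watts, so: three-ish coffee makers.)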


SGE debuts topology-aware scheduling

January 23, 2010 at 12:00 pm PST

I just ran across a great blog entry about SGE debuting topology-aware scheduling.  Dan Templeton does a great job of describing the need for processor topology-aware job scheduling within a server.  Many MPI jobs fit exactly within his description of applications that have “serious resource needs” — they typically require lots of CPU and/or network (or other I/O).  Hence, scheduling an MPI job intelligently not only across the network, but also across the resources inside each server, is pretty darn important.  It’s all about location, location, location!

Particularly as core counts in individual servers are going up. 

Particularly as networks get more complicated inside individual servers. 

Particularly if heterogeneous computing inside a single server becomes popular.

Particularly as resources are now pretty much guaranteed to be non-uniform within an individual server.

These are exactly the reasons that, even though I’m a network middleware developer, I spend time with server-specific projects like hwloc — you really have to take a holistic approach in order to maximize performance.



Happy 1 year anniversary, RCE-Cast!

January 13, 2010 at 12:00 pm PST

 

We were recording an RCE-Cast with the PETSc guys when we realized that we had just about hit our one-year anniversary; the first recording was posted on January 17, 2009.  Wow!  I had no idea that we had been at it that long — Brock and I are both very pleasantly surprised that we’ve managed to keep it going.

If you’re unaware of RCE-Cast, it’s a podcast about “Research Computing and Engineering” that Brock Palen and I record every two weeks.  We talk to people from a variety of software and hardware projects, and cover any other topic that seems related to HPC or RCE. 

Here’s an experiment for our next interview with the Condor folks: “tweet @brockpalen questions for #condor http://tinyurl.com/hqzhm next guest on #RCE”.


MPI-3 User survey: thank you!

January 8, 2010 at 12:00 pm PST

We had an astonishing 837 responses to the MPI User Survey.  Many thanks to all of you who filled out the survey!

The MPI Forum minions are busy analyzing the data — there’s a lot!  We’ll have more definitive results later, but for now, see below the jump for a few quickie facts from the results.
