
MPI run-time at large scale

June 28, 2011 at 5:00 am PST

With the news that Open MPI is being used on the K supercomputer (i.e., the #1 machine on the June 2011 Top500 list), another colleague of mine, Ralph Castain — who focuses on the run-time system in Open MPI — pointed out that K has over 80,000 processors (over 640K cores!).  That’s ginormous.

He was musing to me that it would be fascinating to see some of K’s run-time data for something most people don’t consider too interesting or sexy: MPI job launch performance.

For example, another public use of Open MPI is on Los Alamos National Lab’s RoadRunner, which has 3,000+ nodes at 4 processes per node (remember RoadRunner?  It was #1 for a while, too).

It’s worth noting that Open MPI starts up full-scale jobs on RoadRunner — meaning that all processes complete MPI_INIT — in less than 1 minute.
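
For a concrete sense of how that kind of number gets measured, here is a sketch of my own (not the actual benchmark used on RoadRunner): an MPI program that does nothing but start up and shut down, so timing mpirun from the shell measures little more than how long the run-time takes to get every process through MPI_INIT.

    ! Do-nothing MPI program for eyeballing launch time (a sketch, not
    ! the real RoadRunner benchmark).  Run it as, e.g.,
    ! "time mpirun -np 12000 ./launch_test"; the elapsed time is dominated
    ! by job launch and MPI_INIT because the program itself does no work.
    program launch_test
      use mpi
      implicit none
      integer :: ierr

      call MPI_Init(ierr)
      call MPI_Finalize(ierr)
    end program launch_test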


Open MPI powers 8 petaflops

June 25, 2011 at 6:20 pm PST

A huge congratulations goes out to the RIKEN Advanced Institute for Computational Science and Fujitsu teams, who saw the K supercomputer achieve over 8 petaflops on the June 2011 Top500 list, published this past week.

8 petaflops absolutely demolishes the prior record of about 2.5 petaflops.  Well done!

A sharp-eyed user pointed out that Open MPI was referenced in the “Programming on K Computer” Fujitsu slides (part of the overall SC10 Presentation Download Fujitsu site).  I pinged my Fujitsu colleague on the MPI Forum, Shinji Sumimoto, to ask for a few more details — does K actually use Open MPI with some customizations for their specialized network?  And did Open MPI power the 8 petaflop runs at an amazing 93% efficiency?


A bucket full of new MPI Fortran features

May 23, 2011 at 6:46 am PST

Over this past weekend, I had the motivation and time to overhaul Open MPI’s Fortran support for the better.  Points worth noting:

  • The “use mpi” module now includes all MPI subroutines.  Strict type checking for everything!  (See the sketch below.)
  • Open MPI now only uses a single Fortran compiler — there’s no more artificial division between “f77” and “f90”.
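
To illustrate what that compile-time checking buys you, here is a hand-written sketch of my own (not code from the new module itself).  With the old mpif.h include file, argument mistakes compile silently and fail at run time; with the explicit interfaces pulled in by “use mpi”, the compiler rejects them.

    ! Hand-written sketch: the explicit interfaces provided by "use mpi"
    ! let the compiler check MPI argument lists.
    program use_mpi_demo
      use mpi            ! instead of: include 'mpif.h'
      implicit none
      integer :: ierr, rank

      call MPI_Init(ierr)
      call MPI_Comm_rank(MPI_COMM_WORLD, rank, ierr)

      ! Classic mistakes -- e.g., forgetting the ierror argument:
      !   call MPI_Comm_rank(MPI_COMM_WORLD, rank)
      ! -- are now compile-time errors instead of mysterious run-time crashes.

      if (rank == 0) print *, 'argument lists checked at compile time'
      call MPI_Finalize(ierr)
    end program use_mpi_demo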

There’s still work to be done, of course (this is still off in a Mercurial repository on Bitbucket — not in the Open MPI mainline SVN trunk yet), but the results of this weekend’s code sprint are significantly simpler Open MPI Fortran plumbing behind the scenes and a much, much better implementation of the MPI-2 “use mpi” Fortran bindings.


Building 3rd party Open MPI plugins

January 20, 2011 at 11:47 am PST

Over the past several years, multiple organizations have approached me asking how to develop their own plugins outside of the official Open MPI tree.  As a community, Open MPI hasn’t done a good job of providing an example of how to do this.

Today, I published three examples of compiling Open MPI plugins outside of the official source tree.  A Mercurial repository is freely clonable from my Bitbucket account:

(MOVED: See below)

This repository might get moved somewhere more official (e.g., inside Open MPI’s SVN), but for the moment, it’s an easily-publishable location for sharing with the world.

(UPDATE: the code has been moved to the main Open MPI SVN repository; look under contrib/build-mca-comps-outside-of-tree in the trunk and release branches starting with v1.4)


Do you MPI-2.2?

October 26, 2010 at 4:35 pm PST

Open question to MPI developers: are you using the features added in MPI-2.2?

I ask because I took a little heat at the last MPI Forum meeting for not driving Open MPI to be MPI-2.2 compliant (Open MPI is MPI-2.1 compliant; there are four open tickets that need to be completed for full MPI-2.2 compliance).

But I’m having a hard time finding users who want or need these specific functionalities (admittedly, they’re somewhat obscure).  We’ll definitely get to these items someday — the question is whether that someday needs to be soon or whether it can be a while from now.
