Cisco Blogs: High Performance Computing Networking

Why MPI “wrapper” compilers are Good for you

September 23, 2011 at 5:20 am PST

An interesting thread came up on the Open MPI users’ mailing list the other day: a user wanted Open MPI’s “mpicc” wrapper compiler to accept the same command-line options as MPICH’s “mpicc” wrapper.  On the surface, this is a very reasonable request.  After all, MPI is all about portability — so why not make the wrapper compilers the same?

Unfortunately, this request opens a can of worms and exposes at least one unfortunate truth: the MPI API is portable, but other aspects of MPI, such as compiling/linking MPI applications and launching MPI jobs, are not.

Let’s first explore what wrapper compilers are, and why they’re good for you.
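To make that exploration concrete, here’s a minimal MPI program in C (the file name is just for illustration).  Compiling it as “mpicc hello_mpi.c -o hello_mpi” lets the wrapper quietly supply all of the implementation-specific include paths and link flags; with Open MPI, “mpicc --showme” prints the underlying compiler command line that the wrapper would actually invoke.

    /* hello_mpi.c: a minimal MPI program (illustrative file name).
       Build it with the wrapper:  mpicc hello_mpi.c -o hello_mpi
       The wrapper supplies the MPI include paths and libraries. */
    #include <stdio.h>
    #include <mpi.h>

    int main(int argc, char *argv[])
    {
        int rank, size;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);
        printf("Hello from rank %d of %d\n", rank, size);
        MPI_Finalize();
        return 0;
    }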


Open MPI v1.5.4 released

August 22, 2011 at 8:45 am PST

We released Open MPI v1.5.4 last week.  Woo hoo!

I can’t exactly predict the future, but I anticipate one more release in this series before it transitions to v1.6 (i.e., from a “feature” release series to a “stable” release series where only bug fixes will be applied).

The v1.5 series is actually progressing quite nicely towards v1.6.  It has gotten a lot of run time on real-world machines in production environments, and many bugs have been shaken out.  And there are many new, shiny toys on our development trunk that are slated for v1.7 (i.e., they won’t go into the v1.5/v1.6 series).


MPI run-time at large scale

June 28, 2011 at 5:00 am PST

With the news that Open MPI is being used on the K supercomputer (i.e., the #1 machine on the June 2011 Top500 list), another colleague of mine, Ralph Castain — who focuses on the run-time system in Open MPI — pointed out that K has over 80,000 processors (over 640K cores!).  That’s ginormous.

He was musing to me that it would be fascinating to see some of K’s run-time data for something that most people don’t consider too interesting or sexy: MPI job launch performance.

For example, another public use of Open MPI is on Los Alamos National Lab’s RoadRunner, which has 3,000+ nodes at 4 processes per node (remember RoadRunner?  It was #1 for a while, too).

It’s worth noting that Open MPI starts up full-scale jobs on RoadRunner — meaning that all processes complete MPI_INIT — in less than 1 minute.
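For the curious, here’s a rough sketch (in C) of one way to eyeball launch skew from inside an application.  To be clear, this is my own illustration, not how the RoadRunner numbers were gathered, and it assumes the nodes’ clocks are reasonably synchronized.

    /* launch_skew.c: rough, illustrative sketch -- NOT how the
       RoadRunner numbers were collected.  Each process records the
       wall-clock time at which it emerges from MPI_Init; the spread
       between the earliest and latest times approximates launch skew
       (assuming synchronized clocks across nodes). */
    #include <stdio.h>
    #include <sys/time.h>
    #include <mpi.h>

    int main(int argc, char *argv[])
    {
        struct timeval tv;
        double t, t_min, t_max;
        int rank;

        MPI_Init(&argc, &argv);
        gettimeofday(&tv, NULL);
        t = tv.tv_sec + tv.tv_usec / 1.0e6;

        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Reduce(&t, &t_min, 1, MPI_DOUBLE, MPI_MIN, 0, MPI_COMM_WORLD);
        MPI_Reduce(&t, &t_max, 1, MPI_DOUBLE, MPI_MAX, 0, MPI_COMM_WORLD);
        if (0 == rank) {
            printf("MPI_Init exit spread: %.3f seconds\n", t_max - t_min);
        }

        MPI_Finalize();
        return 0;
    }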


Open MPI powers 8 petaflops

June 25, 2011 at 6:20 pm PST

A huge congratulations goes out to the RIKEN Advanced Institute for Computational Science and Fujitsu teams, who saw the K supercomputer achieve over 8 petaflops on the June 2011 Top500 list, published this past week.

8 petaflops absolutely demolishes the prior record of about 2.5 petaflops.  Well done!

A sharp-eyed user pointed out that Open MPI was referenced in Fujitsu’s “Programming on K Computer” slides (part of Fujitsu’s overall SC10 presentation download site).  I pinged my Fujitsu colleague on the MPI Forum, Shinji Sumimoto, to ask for a few more details — does K actually use Open MPI with some customizations for its specialized network?  And did Open MPI power the 8-petaflop runs at an amazing 93% efficiency?


A bucket full of new MPI Fortran features

May 23, 2011 at 6:46 am PST

Over this past weekend, I had the motivation and time to overhaul Open MPI’s Fortran support for the better.  Points worth noting:

  • The “use mpi” module now includes all MPI subroutines.  Strict type checking for everything (see the sketch below)!
  • Open MPI now only uses a single Fortran compiler — there’s no more artificial division between “f77” and “f90”.

There’s still work to be done, of course (this is still off in a Mercurial repo on Bitbucket — not in the Open MPI mainline SVN trunk yet), but the results of this weekend’s code sprint are significantly simpler Open MPI Fortran plumbing behind the scenes and a much, much better implementation of the MPI-2 “use mpi” Fortran bindings.
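As a quick illustration of what “strict type checking for everything” buys you, here’s a hypothetical Fortran sketch (my own example, not code from the Open MPI tree).  With the full “use mpi” module, the compiler sees an explicit interface for every MPI subroutine, so an argument passed in the wrong position or with the wrong type is rejected at compile time instead of silently misbehaving at run time.

    ! type_check.f90: illustrative example only.
    program type_check
      use mpi             ! explicit interfaces for all MPI subroutines
      implicit none
      integer :: rank, ierr

      call MPI_Init(ierr)
      call MPI_Comm_rank(MPI_COMM_WORLD, rank, ierr)

      ! With "use mpi", swapping the communicator and rank arguments
      ! above would be a compile-time error.  With the old
      ! "include 'mpif.h'", it would compile cleanly and fail
      ! (perhaps mysteriously) at run time.
      print *, 'Hello from rank', rank

      call MPI_Finalize(ierr)
    end program type_check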
