
mpicc != mpicc

In my last post, I talked about why MPI wrapper compilers are Good for you.  The short version is that it is faaar easier to use a wrapper compiler than to force users to figure out what compiler and linker flags the MPI implementation needs — because sometimes they need a lot of flags.

Hence, MPI wrappers are Good for you.  They can save you a lot of pain.

That being said, they can also hurt portability, as one user noted on the Open MPI user’s mailing list recently.
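
To make that concrete, here’s a minimal sketch of the “portable API, non-portable ecosystem” point: the same trivial source compiles under either implementation’s mpicc, even though the wrappers themselves accept different command line options. (Caveat: the version macros below are implementation conventions that I’m assuming here — OPEN_MPI from Open MPI’s mpi.h, MPICH_VERSION from MPICH’s; they are not mandated by the MPI standard.)

/* which_mpi.c: the MPI API is portable, but the implementation
 * behind "mpicc" is not.  Report which mpi.h we were compiled
 * against.  The macros tested here are implementation conventions,
 * not part of the MPI standard. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

#if defined(OPEN_MPI)
    printf("Compiled against Open MPI\n");
#elif defined(MPICH_VERSION) || defined(MPICH2)
    printf("Compiled against MPICH (or MPICH2)\n");
#else
    printf("Compiled against some other MPI implementation\n");
#endif

    MPI_Finalize();
    return 0;
}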


Why MPI “wrapper” compilers are Good for you

An interesting thread on the Open MPI user’s mailing list came up the other day: a user wanted Open MPI’s “mpicc” wrapper compiler to accept the same command line options as MPICH’s “mpicc” wrapper.  On the surface, this is a very reasonable request.  After all, MPI is all about portability — so why not make the wrapper compilers be the same?

Unfortunately, this request opens a can of worms and exposes at least one unfortunate truth: the MPI API is portable, but other aspects of MPI are not, such as compiling/linking MPI applications, launching MPI jobs, etc.

Let’s first explore what wrapper compilers are, and why they’re good for you.
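
As a preview, here’s what the convenience looks like in practice. The program below is a complete, trivial MPI “hello world”; the compile commands in the header comment contrast the wrapper with a hand-rolled equivalent (the install paths and flag list there are hypothetical — real installations often need many more flags):

/* hello_mpi.c: with a wrapper compiler, building is one short command:
 *
 *     mpicc hello_mpi.c -o hello_mpi
 *
 * Without the wrapper, you have to supply every flag the MPI
 * implementation needs yourself, e.g. (hypothetical paths):
 *
 *     gcc hello_mpi.c -o hello_mpi -I/opt/mpi/include -L/opt/mpi/lib -lmpi
 */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, size;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    printf("Hello from rank %d of %d\n", rank, size);
    MPI_Finalize();
    return 0;
}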


Open MPI v1.5.4 released

We released Open MPI v1.5.4 last week.  Woo hoo!

I can’t exactly predict the future, but I anticipate one more release in this series before it transitions to v1.6 (i.e., from a “feature” release series to a “stable” release series where only bug fixes will be applied).

The v1.5 series is actually progressing quite nicely towards v1.6.  It has gotten a lot of run time on real-world machines in production environments, and many bugs have been shaken out.  And there are many new, shiny toys on our development trunk that are slated for v1.7 (i.e., they won’t go into the v1.5/v1.6 series).


MPI run-time at large scale

With the news that Open MPI is being used on the K supercomputer (i.e., the #1 machine on the June 2011 Top500 list), another colleague of mine, Ralph Castain — who focuses on the run-time system in Open MPI — pointed out that K has over 80,000 processors (over 640K cores!).  That’s ginormous.

He mused to me that it would be fascinating to see some of K’s run-time data for something that most people don’t consider too interesting or sexy: MPI job launch performance.

For example, another public use of Open MPI is on Los Alamos National Lab’s RoadRunner, which has 3,000+ nodes at 4 processes per node (remember RoadRunner?  It was #1 for a while, too).

It’s worth noting that Open MPI starts up full-scale jobs on RoadRunner — meaning that all processes complete MPI_INIT — in less than 1 minute.
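
For the curious, here’s a rough sketch of how you might measure something similar yourself. This is not how the RoadRunner numbers were gathered (treat it as illustration only); it times each process’s stay inside MPI_INIT and reduces to the slowest one, which is just one slice of the overall launch time:

/* init_time.c: time how long each process spends in MPI_Init, then
 * report the slowest process.  This captures only part of the full
 * job-launch time (it can't see the time before the process starts). */
#include <mpi.h>
#include <stdio.h>
#include <sys/time.h>

static double now(void)
{
    struct timeval tv;
    gettimeofday(&tv, NULL);
    return tv.tv_sec + tv.tv_usec / 1e6;
}

int main(int argc, char **argv)
{
    double start = now();
    MPI_Init(&argc, &argv);
    double elapsed = now() - start;

    int rank;
    double slowest;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Reduce(&elapsed, &slowest, 1, MPI_DOUBLE, MPI_MAX, 0,
               MPI_COMM_WORLD);
    if (rank == 0) {
        printf("Slowest MPI_Init: %.3f seconds\n", slowest);
    }

    MPI_Finalize();
    return 0;
}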


Open MPI powers 8 petaflops

A huge congratulations goes out to the RIKEN Advanced Institute for Computational Science and Fujitsu teams, who saw the K supercomputer achieve over 8 petaflops on the June 2011 Top500 list, published this past week.

8 petaflops absolutely demolishes the prior record of about 2.5 petaflops.  Well done!

A sharp-eyed user pointed out that Open MPI was referenced in Fujitsu’s “Programming on K Computer” slides (which are part of Fujitsu’s overall “SC10 Presentation Download” site).  I pinged my Fujitsu colleague on the MPI Forum, Shinji Sumimoto, to ask for a few more details: does K actually use Open MPI with some customizations for their specialized network?  And did Open MPI power the 8-petaflop runs at an amazing 93% efficiency?
