
SC’11, Open MPI BOF, and 10 petaflops!

November 6, 2011 at 6:07 am PST

I’m sure most everyone has heard already, but the K supercomputer has been upgraded and now reaches over 10 petaflops.  Wow!

10.51 petaflops, actually, so if you round up, you can say that they “turned it up to 11.”  Ahem.

We’ll actually have Shinji Sumimoto from the K team speaking during the Open MPI BOF at SC’11.  Rolf vandeVaart from NVIDIA will also be discussing their role in Open MPI during the BOF.

We have the 12:15-1:15pm timeslot on Wednesday (room TCC 303); come join us to hear about the present status and future plans for Open MPI.


mpicc != mpicc

September 26, 2011 at 5:00 am PST

In my last post, I talked about why MPI wrapper compilers are Good for you.  The short version is that it is faaar easier to use a wrapper compiler than to force users to figure out what compiler and linker flags the MPI implementation needs — because sometimes they need a lot of flags.

Hence, MPI wrappers are Good for you.  They can save you a lot of pain.

That being said, they can also hurt portability, as one user noted on the Open MPI user’s mailing list recently.
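To make the problem concrete: both implementations can show you the underlying command that their wrapper would run, but even that introspection option is spelled differently.  The output below is illustrative (it assumes installs under /opt/openmpi and /opt/mpich2); the exact flags and paths vary by version and configuration.

    # Open MPI spells it --showme
    shell$ mpicc --showme hello.c -o hello
    gcc hello.c -o hello -I/opt/openmpi/include -pthread -L/opt/openmpi/lib -lmpi

    # MPICH spells it -show
    shell$ mpicc -show hello.c -o hello
    gcc hello.c -o hello -I/opt/mpich2/include -L/opt/mpich2/lib -lmpich -lpthread

A build system that hard-codes one implementation’s wrapper options can therefore break when pointed at the other.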


Why MPI “wrapper” compilers are Good for you

September 23, 2011 at 5:20 am PST

An interesting thread on the Open MPI user’s mailing list came up the other day: a user wanted Open MPI’s “mpicc” wrapper compiler to accept the same command line options as MPICH’s “mpicc” wrapper.  On the surface, this is a very reasonable request.  After all, MPI is all about portability — so why not make the wrapper compilers be the same?

Unfortunately, this request opens a can of worms and exposes at least one unfortunate truth: the MPI API is portable, but other aspects of MPI are not, such as compiling/linking MPI applications, launching MPI jobs, etc.

Let’s first explore what wrapper compilers are, and why they’re good for you.
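As a preview, here’s roughly what a wrapper saves you from typing.  This is a sketch assuming a hypothetical Open MPI install under /opt/openmpi; the real flags vary by version and configuration.

    # What you type:
    shell$ mpicc my_mpi_app.c -o my_mpi_app

    # Roughly what the wrapper runs under the covers:
    shell$ gcc my_mpi_app.c -o my_mpi_app -I/opt/openmpi/include -pthread \
           -L/opt/openmpi/lib -lmpi -ldl -lm

Multiply that by every compiler invocation in a large application’s build, and the value of the wrapper becomes obvious.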


Open MPI v1.5.4 released

August 22, 2011 at 8:45 am PST

We released Open MPI v1.5.4 last week.  Woo hoo!

I can’t exactly predict the future, but I anticipate one more release before transitioning the series to v1.6 (i.e., from a “feature” release series to a “stable” release series where only bug fixes are applied).

The v1.5 series is actually progressing quite nicely towards v1.6.  It has gotten a lot of run time on real-world machines in production environments, and many bugs have been shaken out.  And there are many new, shiny toys on our development trunk that are slated for v1.7 (i.e., they won’t go into the v1.5/v1.6 series).


MPI run-time at large scale

June 28, 2011 at 5:00 am PST

With the news that Open MPI is being used on the K supercomputer (i.e., the #1 machine on the June 2011 Top500 list), another colleague of mine, Ralph Castain — who focuses on the run-time system in Open MPI — pointed out that K has over 80,000 processors (over 640K cores!).  That’s ginormous.

He was musing to me that it would be fascinating to see some of K’s run-time data for what most people don’t consider too interesting or sexy: MPI job launch performance.

For example, another public use of Open MPI is on Los Alamos National Lab’s RoadRunner, which has 3,000+ nodes at 4 processes per node (remember RoadRunner?  It was #1 for a while, too).

It’s worth noting that Open MPI starts up full-scale jobs on RoadRunner — meaning that all processes complete MPI_INIT — in less than 1 minute.
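If you want a rough feel for startup cost on your own cluster, here’s a minimal sketch in C.  Caveats: it measures from entering main() to the end of MPI_INIT in each process, so it misses whatever time the launcher spends before the processes start, and it uses gettimeofday() because MPI_WTIME can’t be called before MPI_INIT.

    #include <stdio.h>
    #include <sys/time.h>
    #include <mpi.h>

    /* Wall-clock seconds; MPI_Wtime is not usable before MPI_Init */
    static double now(void)
    {
        struct timeval tv;
        gettimeofday(&tv, NULL);
        return tv.tv_sec + tv.tv_usec / 1000000.0;
    }

    int main(int argc, char *argv[])
    {
        double start = now();
        MPI_Init(&argc, &argv);
        double local = now() - start;

        /* The slowest process bounds the job's startup time */
        double slowest = 0.0;
        int rank;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Reduce(&local, &slowest, 1, MPI_DOUBLE, MPI_MAX, 0,
                   MPI_COMM_WORLD);
        if (0 == rank) {
            printf("Slowest MPI_Init: %.3f seconds\n", slowest);
        }

        MPI_Finalize();
        return 0;
    }

Run it at increasing scale (e.g., “mpirun -np N ./init_time”, where init_time is whatever you name the binary) and you can watch how launch time grows with job size.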
