

Cisco Blog > High Performance Computing Networking

Open MPI powers 8 petaflops

June 25, 2011 at 6:20 pm PST

A huge congratulations goes out to the RIKEN Advanced Institute for Computational Science and Fujitsu teams, who saw the K supercomputer achieve over 8 petaflops in the June 2011 Top500 list, published this past week.

8 petaflops absolutely demolishes the prior record of about 2.5 petaflops.  Well done!

A sharp-eyed user pointed out that Open MPI was referenced in the “Programming on K Computer” Fujitsu slides (which are part of the overall SC10 Presentation Download Fujitsu site).  I pinged my Fujitsu colleague on the MPI Forum, Shinji Sumimoto, to ask for a few more details — does K actually use Open MPI with some customizations for their specialized network?  And did Open MPI power the 8 petaflop runs at an amazing 93% efficiency?


The commoditization of high performance computing

June 14, 2011 at 5:00 am PST

High Performance Computing (HPC) used to be the exclusive domain of supercomputing national labs and advanced researchers.

This is no longer the case.

Costs have come down, complexity has been reduced, and off-the-shelf solutions are being built to exploit multiple processors these days.  This means that users with large compute needs — which, in an information-rich world, are becoming quite common — can now use techniques pioneered by the HPC community to solve their everyday problems.

Sure, there’s still the bleeding edge of HPC — my Grandma isn’t using a petascale computer (yet).  All the national labs and advanced researchers are still hanging out at the high-end of HPC, pushing the state of the art to get faster and bigger results that simply weren’t possible before.


Unexpected messages = evil

June 11, 2011 at 4:25 am PST

Another term that is not infrequently used when discussing message-passing applications is “unexpected messages.”

What are they, and why are they (usually) bad?

The quick definition is that an unexpected message is one that arrives before a corresponding MPI receive has been posted.  In more concrete terms: an MPI process has sent a message to a process that has not yet called some flavor of MPI_RECV to receive it.

Why is this a Bad Thing?


“Eager Limits”, part 2

May 31, 2011 at 7:30 am PST

Open MPI actually has several different protocols for sending messages — not just eager / rendezvous.

Our protocols were originally founded on the ideas described in this paper.  Many things have changed since that 2004 paper, but some of the core ideas are still the same.

The picture to the right shows how Open MPI divides an MPI message up into segments and sends them in three phases.  Open MPI’s specific definition of the “eager limit” is the max payload size that is sent with MPI match information to the receiver as the first part of the transfer.  If the entire message fits in the eager limit, no further transfers / no CTS is needed.


What is an MPI “eager limit”?

May 28, 2011 at 7:30 am PST

Technically speaking, the MPI standard does not define anything called an “eager limit.”

An “eager limit” is a term used to describe a method of sending short messages used by many MPI implementations.  That is, it’s an implementation technique — it’s not part of the MPI standard at all.  And since it’s not standardized, it also tends to be different in each MPI implementation.  More specifically: if you write your MPI code to rely on a specific implementation’s “eager limit” behavior, your code may not perform well (or may even deadlock!) with other MPI implementations.

So — what exactly is an “eager limit”?
