Open MPI powers 8 petaflops
A huge congratulations goes out to the RIKEN Advanced Institute for Computational Science and Fujitsu teams, who saw the K supercomputer achieve over 8 petaflops in the June 2011 Top500 list, published this past week.
8 petaflops absolutely demolishes the prior record of about 2.5 petaflops. Well done!
A sharp-eyed user pointed out that Open MPI was referenced in the “Programming on K Computer” Fujitsu slides (which are part of the overall SC10 Presentation Download Fujitsu site). I pinged my Fujitsu colleague on the MPI Forum, Shinji Sumimoto, to ask for a few more details — does K actually use Open MPI with some customizations for their specialized network? And did Open MPI power the 8 petaflop runs at an amazing 93% efficiency?
He answered: yes. Here’s what he said:
It is true. Several extensions have been made because of low latency communication for point-to-point, reduction of memory consumption, and multiple network interface with specified one not trunking, etc. Some of these can be contributed to Open MPI community, however, currently we are still development phase, will be next year.
We thank Open MPI development team very much.
Question: How awesome is that?
Answer: Incredibly awesome. 8 petaflops awesome, in fact.
(To be fair, a colleague at Oracle pointed out that it was the SPARC chips that powered the 8+ petaflops — ok, fair enough — but an efficient MPI implementation played a big part, too.)
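For context on that 93% figure: Linpack efficiency is simply the measured sustained performance (Rmax) divided by the machine’s theoretical peak (Rpeak). Here’s a quick sketch of the arithmetic, using K’s published June 2011 Top500 numbers — roughly 8.162 petaflops Rmax against 8.774 petaflops Rpeak (figures taken from the published list, not from the slides mentioned above):

```python
# Linpack efficiency = Rmax / Rpeak.
# Numbers below are K's June 2011 Top500 figures (assumed from the
# published list): Rmax ~= 8.162 PFLOPS, Rpeak ~= 8.774 PFLOPS.
rmax_pflops = 8.162
rpeak_pflops = 8.774

efficiency = rmax_pflops / rpeak_pflops
print(f"Linpack efficiency: {efficiency:.1%}")
```

That ratio works out to about 93% — remarkably high for a machine of that scale, where interconnect and MPI overheads usually eat a bigger slice of peak.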