
X petaflops, where X>1

Lotsa news coming out in the ramp-up to SC.  Probably the biggest is that China is now the proud owner of the 2.5-petaflop computing monster named “Tianhe-1A”.

Congratulations to all involved!  2.5 petaflops is an enormous achievement.

Just to put this in perspective, there are only three other (publicly disclosed) machines in the world right now that have reached a petaflop: the Oak Ridge US Department of Energy (DoE) “Jaguar” machine hit 1.7 petaflops, China’s “Nebulae” hit 1.3 petaflops, and the Los Alamos US DoE “Roadrunner” machine hit 1.0 petaflops.

While petaflop-and-beyond may stay firmly in the bleeding-edge research domain for quite some time, I’m sure we’ll see more machines of this class over the next few years.


Do you MPI-2.2?

Open question to MPI developers: are you using the features added in MPI-2.2?

I ask because I took a little heat in the last MPI Forum meeting for not driving Open MPI to be MPI-2.2 compliant (Open MPI is MPI-2.1 compliant; there are four open tickets that need to be completed for full MPI-2.2 compliance).

But I’m having a hard time finding users who want or need these specific functionalities (admittedly, they’re somewhat obscure).  We’ll definitely get to these items someday — the question is whether that someday needs to be soon or whether it can be a while from now.
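To give a flavor of what I mean, here’s a minimal sketch of one of the features MPI-2.2 added: MPI_Reduce_local(), which applies a reduction operation purely within the calling process, with no communication at all.  This is just my own illustration of the kind of addition MPI-2.2 made; it is not necessarily one of the open Open MPI tickets.

    /* Minimal sketch using MPI_Reduce_local(), added in MPI-2.2.
     * It applies a reduction locally: inout[i] = in[i] op inout[i]. */
    #include <stdio.h>
    #include <mpi.h>

    int main(int argc, char *argv[])
    {
        int in[4]    = { 1, 2, 3, 4 };
        int inout[4] = { 10, 20, 30, 40 };

        MPI_Init(&argc, &argv);

        /* No communication happens here -- the reduction is purely local */
        MPI_Reduce_local(in, inout, 4, MPI_INT, MPI_SUM);

        printf("inout = %d %d %d %d\n",
               inout[0], inout[1], inout[2], inout[3]);

        MPI_Finalize();
        return 0;
    }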



Sockets, cores, and hyperthreads… oh my!

Core counts are going up.  Cisco’s C460 rack-mount server series, for example, can have up to 32 Nehalem EX cores.  As a direct result, we may well be returning to the era of running more than one MPI job per server.  This has long been true on “big iron” parallel resources, but commodity Linux HPC clusters have tended towards the one-MPI-job-per-server model in recent history.

Because of this trend, I have an open-ended question for MPI users and cluster administrators: how do you want to bind MPI processes to processors?  For example: what kinds of binding patterns do you want?  How many hyperthreads / cores / sockets do you want each process to bind to?  How do you want to specify what process binds where?  What level of granularity of control do you want / need?  (…and so on)

We are finding that every user we ask seems to have a slightly different answer.  What do you think?  Let me know in the comments below.
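In the meantime, if you want to check what binding your MPI launcher actually gave you, here’s a small Linux-specific sketch (my own illustration, not something shipped with Open MPI) that each rank can run to report the cores in its affinity mask:

    /* Linux-specific sketch: each MPI rank prints the cores it is bound to,
     * as reported by sched_getaffinity().  Handy for sanity-checking
     * whatever binding options you passed to your MPI launcher. */
    #define _GNU_SOURCE
    #include <sched.h>
    #include <stdio.h>
    #include <mpi.h>

    int main(int argc, char *argv[])
    {
        int rank, i, len = 0;
        cpu_set_t mask;
        char buf[1024] = "";

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        CPU_ZERO(&mask);
        if (sched_getaffinity(0, sizeof(mask), &mask) == 0) {
            for (i = 0; i < CPU_SETSIZE && len < (int) sizeof(buf) - 16; ++i) {
                if (CPU_ISSET(i, &mask)) {
                    len += sprintf(buf + len, "%d ", i);
                }
            }
        }
        printf("Rank %d is bound to core(s): %s\n", rank, buf);

        MPI_Finalize();
        return 0;
    }

Launch a few copies per node under different binding options and compare the output; it makes the effect of each option pretty obvious.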



Open MPI v1.5 (and v1.4.3) released!

The Open MPI team is extremely pleased to release version 1.5, representing over a year of research, development, and testing.  Read the full announcement here.  Version 1.5 is chock full of new features and countless little enhancements.  We hope you’ll enjoy it!

Open MPI version 1.4.3 was also released just a few days ago.  It’s mainly a bug-fix release that increases the stability of the time-tested v1.4 series.

Some of you may be wondering, “Why the heck would they put out a point release and then a major new revision within a few days of each other?”

How fortunate that you ask!  Let me explain…


MPI concepts that didn’t make it

The following is an abbreviated list of my favorite concepts and/or specific functions that never made the cut into an official version of the MPI specification:
  • MPI_ESP(): The “do what I meant, not what my code says” function.  The function is intended as a hint to the MPI implementation that the executing code is likely incorrect, and the implementation should do whatever it feels that the programmer really intended it to do.
  • MPI_Encourage(): A watered-down version of MPI_Progress().
  • MPI_Alltoalltoall(): Every process sends to every other process, and then, just to be sure, everyone sends to everyone else again.  Good for benchmarks.
