Cisco Blog: High Performance Computing Networking

Stanford High Performance Computing Conference

December 9, 2010 at 3:18 pm PST

Earlier today, I gave a talk entitled “How to Succeed in MPI without really trying” (slides: PPTX, PDF) at the Stanford High Performance Computing Conference. The audience was mostly MPI / HPC users, but with a healthy showing of IT and HPC cluster administrators.

My talk was about trying to make MPI (and parallel computing in general) just a little easier.  For example, I tried to point out some common MPI mistakes I’ve seen people make.  I also opined about how, in many cases, it’s easier to design parallelism in from the start rather than trying to graft it into an existing application.
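
To give a flavor of the kind of mistake I mean (this is my own illustration, not taken from the slides): relying on MPI_Send to buffer messages.  In the sketch below, both ranks send first and receive second; for small messages this often appears to work because of eager buffering, but for large messages both MPI_Send calls can block waiting for a receive that is never posted.

    #include <mpi.h>
    #include <stdlib.h>

    /* Hypothetical sketch of a classic mistake: run with exactly two ranks.
       Both ranks call MPI_Send before MPI_Recv.  With a large message,
       each MPI_Send can block until the peer posts a receive, so the
       program deadlocks. */
    int main(int argc, char **argv)
    {
        int rank, peer;
        const int N = 1 << 20;          /* ~8 MB of doubles per message */
        double *sendbuf, *recvbuf;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        peer = (rank == 0) ? 1 : 0;

        sendbuf = malloc(N * sizeof(double));
        recvbuf = malloc(N * sizeof(double));

        /* Buggy pattern: send-then-receive on both ranks. */
        MPI_Send(sendbuf, N, MPI_DOUBLE, peer, 0, MPI_COMM_WORLD);
        MPI_Recv(recvbuf, N, MPI_DOUBLE, peer, 0, MPI_COMM_WORLD,
                 MPI_STATUS_IGNORE);

        /* A safer pattern is to let MPI pair up the exchange itself:
           MPI_Sendrecv(sendbuf, N, MPI_DOUBLE, peer, 0,
                        recvbuf, N, MPI_DOUBLE, peer, 0,
                        MPI_COMM_WORLD, MPI_STATUS_IGNORE); */

        free(sendbuf);
        free(recvbuf);
        MPI_Finalize();
        return 0;
    }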


X petaflops, where X>1

October 29, 2010 at 4:49 am PST

Lotsa news coming out in the ramp-up to SC.  Probably the biggest is the news that China is the proud owner of the 2.5-petaflop computing monster named “Tianhe-1A”.

Congratulations to all involved!  2.5 petaflops is an enormous achievement.

Just to put this in perspective, there are only three other (publicly disclosed) machines in the world right now that have reached a petaflop: the Oak Ridge US Department of Energy (DoE) “Jaguar” machine hit 1.7 petaflops, China’s “Nebulae” hit 1.3 petaflops, and the Los Alamos US DoE “Roadrunner” machine hit 1.0 petaflops.

While petaflop-and-beyond may stay firmly in the bleeding-edge research domain for quite some time, I’m sure we’ll see more machines of this class over the next few years.


Do you MPI-2.2?

October 26, 2010 at 4:35 pm PST

Open question to MPI developers: are you using the features added in MPI-2.2?

I ask because I took a little heat in the last MPI Forum meeting for not driving Open MPI to be MPI-2.2 compliant (Open MPI is MPI-2.1 compliant; there are four open tickets that need to be completed for full MPI-2.2 compliance).

But I’m having a hard time finding users who want or need these specific functionalities (admittedly, they’re somewhat obscure).  We’ll definitely get to these items someday — the question is whether that someday needs to be soon or whether it can be a while from now.
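
For readers wondering what kinds of routines are at stake, here is a small sketch of my own (not taken from the Forum tickets) using MPI_Reduce_local, one of the functions added in MPI-2.2.  It applies a reduction operation to two local buffers with no communication at all, which is mainly useful if you are layering your own reduction algorithms on top of MPI.

    #include <mpi.h>
    #include <stdio.h>

    /* Sketch of MPI_Reduce_local (added in MPI-2.2): reduce two local
       buffers within a single process; no messages are sent. */
    int main(int argc, char **argv)
    {
        double a[4] = { 1, 2, 3, 4 };
        double b[4] = { 10, 20, 30, 40 };
        int i;

        MPI_Init(&argc, &argv);

        /* b[i] = a[i] + b[i], computed entirely within this process. */
        MPI_Reduce_local(a, b, 4, MPI_DOUBLE, MPI_SUM);

        for (i = 0; i < 4; ++i)
            printf("b[%d] = %g\n", i, b[i]);

        MPI_Finalize();
        return 0;
    }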


Sockets, cores, and hyperthreads… oh my!

October 15, 2010 at 5:00 am PST

Core counts are going up.  Cisco’s C460 rack-mount server series, for example, can have up to 32 Nehalem EX cores.  As a direct result, we may well be returning to the era of running more than one MPI process per server.  This has long been true in “big iron” parallel resources, but commodity Linux HPC clusters have tended towards the one-MPI-job-per-server model in recent history.

Because of this trend, I have an open-ended question for MPI users and cluster administrators: how do you want to bind MPI processes to processors?  For example: what kinds of binding patterns do you want?  How many hyperthreads / cores / sockets do you want each process to bind to?  How do you want to specify what process binds where?  What level of granularity of control do you want / need?  (…and so on)

We are finding that every user we ask seems to have slightly different answers.  What do you think?  Let me know in the comments below.
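
For what it’s worth, one low-level way to experiment with binding today, outside of whatever your MPI launcher offers, is to call hwloc directly.  The sketch below is my illustration (not a recommendation of any particular policy): it binds each MPI process to one core chosen by its rank, and omits all error checking.

    #include <mpi.h>
    #include <hwloc.h>

    /* Hedged sketch: bind each MPI process to a core selected by its
       rank, using hwloc's C API.  Real deployments would more likely
       use the MPI launcher's binding options; this just shows the
       mechanics. */
    int main(int argc, char **argv)
    {
        int rank, ncores;
        hwloc_topology_t topo;
        hwloc_obj_t core;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        hwloc_topology_init(&topo);
        hwloc_topology_load(topo);

        ncores = hwloc_get_nbobjs_by_type(topo, HWLOC_OBJ_CORE);
        core = hwloc_get_obj_by_type(topo, HWLOC_OBJ_CORE, rank % ncores);

        /* Bind this whole process (all of its threads) to the chosen core. */
        hwloc_set_cpubind(topo, core->cpuset, HWLOC_CPUBIND_PROCESS);

        /* ... application work runs here, pinned to one core ... */

        hwloc_topology_destroy(topo);
        MPI_Finalize();
        return 0;
    }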


MPI concepts that didn’t make it

October 8, 2010 at 5:00 am PST

The following is an abbreviated list of my favorite concepts and/or specific functions that never made the cut into an official version of the MPI specification:
  • MPI_ESP(): The “do what I meant, not what my code says” function.  The function is intended as a hint to the MPI implementation that the executing code is likely incorrect, and the implementation should do whatever it feels that the programmer really intended it to do.
  • MPI_Encourage(): A watered-down version of MPI_Progress().
  • MPI_Alltoalltoall(): Every process sends to every other process, and then, just to be sure, everyone sends to everyone else again.  Good for benchmarks.
