Cisco Blog > High Performance Computing Networking

Do you MPI-2.2?

October 26, 2010 at 4:35 pm PST

Open question to MPI developers: are you using the features added in MPI-2.2?

I ask because I took a little heat at the last MPI Forum meeting for not driving Open MPI to be MPI-2.2 compliant (Open MPI is MPI-2.1 compliant; there are four open tickets that need to be completed for full MPI-2.2 compliance).

But I’m having a hard time finding users who want or need these specific functionalities (admittedly, they’re somewhat obscure).  We’ll definitely get to these items someday — the question is whether that someday needs to be soon or whether it can be a while from now.
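
For a concrete sense of what these additions look like, here is a minimal sketch of mine (not an example from the spec) of one MPI-2.2 feature, the scalable distributed graph topology interface, in which each process declares only its own neighbors rather than supplying the whole graph:

#include <stdio.h>
#include <mpi.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    /* Each process names only its own neighbors (here, a simple ring) */
    int neighbors[2] = { (rank + size - 1) % size, (rank + 1) % size };

    MPI_Comm ring;
    MPI_Dist_graph_create_adjacent(MPI_COMM_WORLD,
                                   2, neighbors, MPI_UNWEIGHTED,  /* in-edges */
                                   2, neighbors, MPI_UNWEIGHTED,  /* out-edges */
                                   MPI_INFO_NULL, 0 /* no reorder */, &ring);

    printf("rank %d of %d: MPI-2.2 distributed graph communicator created\n",
           rank, size);

    MPI_Comm_free(&ring);
    MPI_Finalize();
    return 0;
}

The older MPI-1 graph topology interface required every process to supply the entire graph, which doesn't scale; this per-process declaration is exactly the kind of quietly useful (but admittedly obscure) addition in question.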


Sockets, cores, and hyperthreads… oh my!

October 15, 2010 at 5:00 am PST

Core counts are going up.  Cisco’s C460 rack-mount server series, for example, can have up to 32 Nehalem-EX cores.  As a direct result, we may well be returning to the era of running more than one MPI job per server.  This has long been true on “big iron” parallel resources, but commodity Linux HPC clusters have tended toward the one-MPI-job-per-server model in recent history.

Because of this trend, I have an open-ended question for MPI users and cluster administrators: how do you want to bind MPI processes to processors?  For example: what kinds of binding patterns do you want?  How many hyperthreads / cores / sockets do you want each process to bind to?  How do you want to specify what process binds where?  What level of granularity of control do you want / need?  (…and so on)

We are finding that every user we ask seems to have slightly different answers.  What do you think?  Let me know in the comments below.
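
To make the question concrete, here is one possible policy sketched in code.  This is purely my illustration using the hwloc library, and it assumes Open MPI's OMPI_COMM_WORLD_LOCAL_RANK environment variable for the per-server rank; it is not a statement of how any MPI implementation binds today:

#include <stdio.h>
#include <stdlib.h>
#include <hwloc.h>

int main(void)
{
    /* Local rank on this server; Open MPI exports this variable
       (an assumption; other launchers have their own equivalents) */
    const char *lr = getenv("OMPI_COMM_WORLD_LOCAL_RANK");
    int local_rank = lr ? atoi(lr) : 0;

    hwloc_topology_t topo;
    hwloc_topology_init(&topo);
    hwloc_topology_load(&topo);

    /* One simple policy: local process N binds to core N (wrapping) */
    int ncores = hwloc_get_nbobjs_by_type(topo, HWLOC_OBJ_CORE);
    if (ncores <= 0)
        return 1;  /* should not happen on a sane topology */
    hwloc_obj_t core = hwloc_get_obj_by_type(topo, HWLOC_OBJ_CORE,
                                             local_rank % ncores);

    /* Bind to the whole core: the process may float across that core's
       hyperthreads, but not across cores or sockets */
    if (hwloc_set_cpubind(topo, core->cpuset, 0) != 0)
        perror("hwloc_set_cpubind");

    hwloc_topology_destroy(topo);
    return 0;
}

A dozen equally plausible policies exist (bind to the socket, bind to a single hyperthread, leave one core free for the OS, and so on), which is exactly why I'm asking.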


MPI concepts that didn’t make it

October 8, 2010 at 5:00 am PST

The following is an abbreviated list of my favorite concepts and/or specific functions that never made the cut into an official version of the MPI specification:
  • MPI_ESP(): The “do what I meant, not what my code says” function.  The function is intended as a hint to the MPI implementation that the executing code is likely incorrect, and the implementation should do whatever it feels that the programmer really intended it to do.
  • MPI_Encourage(): A watered-down version of MPI_Progress().
  • MPI_Alltoalltoall(): Every process sends to every other process, and then, just to be sure, everyone sends to everyone else again.  Good for benchmarks.


“Give me 4 255-sided dice and I’ll get you some IPs”

September 29, 2010 at 12:00 pm PST

Have you ever wondered how an MPI implementation picks network paths and allocates resources?  It’s a pretty complicated (set of) issue(s), actually.

An MPI implementation must tread the fine line between performance and resource consumption.  If the implementation chooses poorly, it risks poor performance and/or the wrath of the user.  If the implementation chooses well, users won’t notice at all — they silently enjoy good performance.

It’s a thankless job, but someone’s got to do it.  :-)
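
Taking the title literally for a moment, here is a toy illustration of mine (never mind that an octet actually has 256 possible values): rolling four dice, one per octet, really will get you an IPv4 address.  The point of the post is that a real implementation has to choose far more carefully than this:

#include <stdio.h>
#include <stdlib.h>
#include <time.h>

int main(void)
{
    srand((unsigned) time(NULL));

    /* One "roll" per octet of an IPv4 address */
    printf("%d.%d.%d.%d\n",
           rand() % 256, rand() % 256,
           rand() % 256, rand() % 256);
    return 0;
}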


More traffic

May 4, 2010 at 12:00 pm PST

Traffic.  I find myself still thinking about my last entry today as I’m riding the CTA Blue Line from O’Hare airport to downtown Chicago for the MPI Forum meeting this afternoon.  Here I am, being spirited downtown at a steady clip on a commuter train while I see thousands of gridlocked cars on one side of me and easily flowing motor vehicles on the other.  I will definitely reach downtown before the majority of vehicles that are only a few feet away from me on the Kennedy Expressway, despite the fact that I’m quite sure I left O’Hare long after they did.

Traffic is such a great networking metaphor that it gives insight into today’s ramble: it’s well understood that network packets may be delivered in a different order than the one in which they were sent.  What’s less understood is why.
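
As a toy illustration of the consequence (my sketch, not anything from the post): when packets can arrive out of order, the receiver has to restore the original sequence, typically by slotting each packet into place by its sequence number rather than by its arrival order:

#include <stdio.h>
#include <string.h>

#define NPKTS 4

struct packet {
    int  seq;           /* sender-assigned sequence number */
    char payload[16];
};

int main(void)
{
    /* Simulated arrival order: 2, 0, 3, 1 */
    struct packet arrived[NPKTS] = {
        { 2, "chunk C" }, { 0, "chunk A" }, { 3, "chunk D" }, { 1, "chunk B" }
    };

    char stream[NPKTS][16];
    for (int i = 0; i < NPKTS; i++) {
        /* Slot each packet by sequence number, not arrival order */
        memcpy(stream[arrived[i].seq], arrived[i].payload,
               sizeof(arrived[i].payload));
    }

    for (int i = 0; i < NPKTS; i++)
        printf("seq %d: %s\n", i, stream[i]);
    return 0;
}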
