Cisco Blog > High Performance Computing Networking

MPI concepts that didn’t make it

The following is an abbreviated list of my favorite concepts and/or specific functions that never made the cut into an official version of the MPI specification:
  • MPI_ESP(): The “do what I meant, not what my code says” function.  The function is intended as a hint to the MPI implementation that the executing code is likely incorrect, and the implementation should do whatever it feels that the programmer really intended it to do.
  • MPI_Encourage(): A watered-down version of MPI_Progress().
  • MPI_Alltoalltoall(): Every process sends to every other process, and then, just to be sure, everyone sends to everyone else again.  Good for benchmarks.

New blog site!

Just a digression from the normal technical talk here…  We finally launched the new Cisco blogs site.  Woo hoo!!

Most importantly, I wanted to let you all know that the landing page and RSS feed URLs have both changed.  There are HTTP redirects in place for both (which I noticed this morning caused a bunch of old RSS entries to be marked as “new” — oops), but just in case you need to know them:

You don’t need to update your bookmarks / RSS readers, but you might want to anyway just because all the cool kids are doing it.

“Give me 4 255-sided die and I’ll get you some IPs”

Have you ever wondered how an MPI implementation picks network paths and allocates resources?  It’s a pretty complicated (set of) issue(s), actually.

An MPI implementation must tread the fine line between performance and resource consumption.  If the implementation chooses poorly, it risks poor performance and/or the wrath of the user.  If the implementation chooses well, users won’t notice at all — they silently enjoy good performance.

It’s a thankless job, but someone’s got to do it.  :-)

Process-to-process copy in Linux

More exciting news on the Linux kernel front (thanks for the heads-up, Brice!): our friends at Big Blue have contributed a patch and started good conversation on the LKML mailing list about process-to-process copying.  We still don’t have a good solution for being notified when registered memory is freed (my last post on this topic mentioned that the ummunotify patch had hit the -mm tree, but that eventually didn’t make it up to Linus’ tree), but hey — this is progress, too (albeit in a slightly different direction), so I’ll take it!

“Why do I care?” you say.

I’m glad you asked.  Let me explain…

It’s all about the Fortran

I was reminded recently how much of today’s MPI applications are written in Fortran.  This is why we’re spending sooo much time on Fortran in the MPI-3 process (97 printed pages of Fortran material for the upcoming Stuttgart MPI Forum meeting — yowzers!).

Yes, Fortran.

(yes, I know this isn’t directly about high performance networking — but it is worth remembering that a huge number of people use high performance networking via Fortran)

Before you laugh, remember that computer scientists/engineers don’t write the majority of the real-world codes that run on lots of today’s parallel computational resources.  Real scientists and engineers do.

Er, I mean: rocket scientists, chemists, physicists — these are the types of people who have enormous computational problems that require HPC environments to solve.  These are the people writing the codes that solve the “nature of the universe” kinds of problems.  And they write in Fortran.
