

High Performance Computing Networking

“Give me 4 255-sided die and I’ll get you some IPs”

September 29, 2010 at 12:00 pm PST

Have you ever wondered how an MPI implementation picks network paths and allocates resources?  It’s a pretty complicated (set of) issue(s), actually.

An MPI implementation must tread the fine line between performance and resource consumption.  If the implementation chooses poorly, it risks poor performance and/or the wrath of the user.  If the implementation chooses well, users won’t notice at all — they silently enjoy good performance.

It’s a thankless job, but someone’s got to do it.  :-)
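For a concrete taste of the problem, here is a minimal sketch (not Open MPI's actual selection logic, just an illustration) of the very first step an MPI implementation faces before it can pick TCP network paths: enumerating the local interfaces with getifaddrs() and treating each IPv4 address as a candidate endpoint.

```c
/* Minimal sketch: list the local IPv4 interfaces an MPI implementation
 * could consider as candidate TCP endpoints.  A real implementation
 * does far more (scoring, reachability checks, resource limits). */
#include <stdio.h>
#include <ifaddrs.h>
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>

int main(void)
{
    struct ifaddrs *ifaces, *ifa;
    char addr[INET_ADDRSTRLEN];

    if (getifaddrs(&ifaces) != 0) {
        perror("getifaddrs");
        return 1;
    }
    for (ifa = ifaces; ifa != NULL; ifa = ifa->ifa_next) {
        /* Only consider IPv4 interfaces in this sketch */
        if (ifa->ifa_addr == NULL || ifa->ifa_addr->sa_family != AF_INET) {
            continue;
        }
        struct sockaddr_in *sin = (struct sockaddr_in *) ifa->ifa_addr;
        inet_ntop(AF_INET, &sin->sin_addr, addr, sizeof(addr));
        /* A real MPI implementation would now score this interface:
         * is it up?  how fast is it?  can the peer actually reach it? */
        printf("candidate interface %s: %s\n", ifa->ifa_name, addr);
    }
    freeifaddrs(ifaces);
    return 0;
}
```

Even this trivial listing hints at the real work: once you have several candidates per host, the implementation still has to match them against each peer's candidates and decide how many connections and buffers to dedicate to each path.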



Process-to-process copy in Linux

September 16, 2010 at 12:00 pm PST

More exciting news on the Linux kernel front (thanks for the heads-up, Brice!): our friends at Big Blue have contributed a patch and started a good conversation on the LKML mailing list about process-to-process copying.  We still don’t have a good solution for being notified when registered memory is freed (my last post on this topic mentioned that the ummunotify patch had hit the -mm tree, but that eventually didn’t make it up to Linus’ tree), but hey — this is progress, too (albeit in a slightly different direction), so I’ll take it!

“Why do I care?” you say.

I’m glad you asked.  Let me explain…
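For context, and purely as an illustration rather than the patch under discussion: a mechanism along these lines eventually landed in mainline Linux as process_vm_readv() / process_vm_writev() (kernel 3.2 and later).  The sketch below shows why MPI implementations care: a process can pull a large message directly out of a peer process's address space in a single copy, instead of bouncing it through a shared-memory buffer in two copies.  The peer PID and remote buffer address here are placeholder values that would normally be exchanged out of band.

```c
/* Sketch of a single-copy, process-to-process read using
 * process_vm_readv() (available in Linux 3.2+ / glibc 2.15+).
 * remote_pid and remote_buf_addr are hypothetical placeholders. */
#define _GNU_SOURCE
#include <stdio.h>
#include <sys/uio.h>
#include <sys/types.h>

int main(void)
{
    pid_t remote_pid = 12345;             /* placeholder: peer process ID */
    void *remote_buf_addr = (void *) 0;   /* placeholder: peer buffer address */
    char local_buf[4096];

    struct iovec local  = { .iov_base = local_buf,       .iov_len = sizeof(local_buf) };
    struct iovec remote = { .iov_base = remote_buf_addr, .iov_len = sizeof(local_buf) };

    /* Copy directly from the peer's address space into ours: one copy,
     * no intermediate shared-memory bounce buffer. */
    ssize_t n = process_vm_readv(remote_pid, &local, 1, &remote, 1, 0);
    if (n < 0) {
        perror("process_vm_readv");
        return 1;
    }
    printf("copied %zd bytes from pid %d\n", n, (int) remote_pid);
    return 0;
}
```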


It’s all about the Fortran

September 13, 2010 at 12:00 pm PST

I was reminded recently how many of today’s MPI applications are written in Fortran.  This is why we’re spending sooo much time on Fortran in the MPI-3 process (97 printed pages of Fortran material for the upcoming Stuttgart MPI Forum meeting — yowzers!).

Yes, Fortran.

(yes, I know this isn’t directly about high performance networking — but it is worth remembering that a huge number of people use high performance networking via Fortran)

Before you laugh, remember that computer scientists/engineers don’t write the majority of the real-world codes that run on lots of today’s parallel computational resources.  Real scientists and engineers do.

Er, I mean: rocket scientists, chemists, physicists — these are the types of people who have enormous computational problems that require HPC environments to solve.  These are the people writing the codes that solve the “nature of the universe” kinds of problems.  And they write in Fortran.


An app for that

August 27, 2010 at 12:00 pm PST

Doug Eadline recently wrote a cluster rant entitled “A Cluster in your Pocket”, asking: “What if your cell phone could bring you real time results from a supercomputer?”

We’ve actually idly chatted about such things in the Open MPI community for a while.  It would be tremendously fun to write an iPhone/Android app that could talk to an MPI implementation and/or application.  Perhaps a good starting point would be to have the MPI implementation talk to an iPhone/Android phone.



Hot Interconnects conference roundup

August 23, 2010 at 12:00 pm PST

[Photo: Hot Interconnects sign on a Google bike]

As I mentioned in a few prior posts, I attended the Hot Interconnects conference last week, which happened to be hosted at the Googleplex.

Beautiful weather, interesting talks, and lively discussion are three good phrases to describe the conference. 

It’s always good to run into the same people you tend to see at these conferences and catch up on their latest work.  But it’s equally fun to talk to new people whom you’ve never met before.  Get a new perspective, hear a different way of looking at something, or even just listen to the youthful exuberance of the next generation of network researchers.

It’s all good stuff!
