Cisco Blogs


MPI-3 Fortran Community Feedback Needed!

October 23, 2009 at 12:00 pm PST

As many of you know, I’m an active member of the MPI Forum.  We have recently completed MPI-2.2 and have shifted our sights to focus on MPI-3. 

For some inexplicable reason, I’ve become heavily involved in the MPI-3 Fortran working group.  There are some well-known problems with the MPI-2 Fortran 90 interfaces; the short version of the MPI-3 Fortran WG’s mission is to “fix those problems.” 

A great summary of what the Fortran WG is planning for MPI-3 is available on the Forum wiki page; we’d really appreciate feedback from the Fortran MPI developer community on these ideas. 

There is one significant issue on which we definitely need feedback from the community before making a decision.  Craig Rasmussen from Los Alamos National Laboratory asked me to post the following “request for information” to the greater Fortran MPI developer community.  Please send feedback via comments on this blog entry, email to me directly, or the MPI-3 Fortran working group mailing list.


Parallel debugging

October 22, 2009 at 12:00 pm PST

Debugging parallel applications is hard.  There’s no way around it: bugs can get infinitely more complex when you have not just one thread of control running, but N processes — each with M threads — all running simultaneously.  Printf-style debugging is simply not sufficient; when a process is running on a remote compute node, even the output from a print statement can take time to be sent across the network and displayed on your screen — a delay that can mask the real issue, because the output shows up significantly later than the problem actually occurred.

Tools are vital for parallel application development, and there are oodles of good ones out there.  I just wanted to highlight one really cool open source (free!) tool today called “Padb”.  Written by Ashley Pittman, it’s a small but surprisingly useful tool.  One scenario where I find Padb helpful is when an MPI job “hangs” — it just seems to stop making progress, but does not die or abort.  Padb can go find all the individual MPI processes, attach to them, generate stack traces, and display variable and parameter dumps for each process in the MPI job.  This allows a developer to see where the application is hung — an important first step in the troubleshooting process.
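To make the scenario above concrete, here is a minimal sketch of the kind of command-line session I mean.  The option names below are from my memory of the padb documentation, so treat them as assumptions and check `padb --help` against your installed version before relying on them.

```shell
# Hedged sketch: inspecting a hung MPI job with padb.
# Option names are assumptions from memory of the padb docs;
# verify with "padb --help" on your own installation.

# First, ask padb which MPI jobs it can see on this system.
padb --show-jobs

# Then dump stack traces for every process in the job, merged
# into a tree so processes with identical backtraces are grouped
# together -- this is usually enough to see where the hang is.
padb --all --stack-trace --tree
```

The tree-merged output is the key feature for large jobs: rather than reading N separate backtraces, you see at a glance that, say, 255 ranks are blocked in MPI_Recv while one is off somewhere else.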


SC’09 Happenings

October 14, 2009 at 12:00 pm PST

Who’s going to SC’09?  I’ll be there!

I’m hosting the Open MPI Community Meeting BOF with George Bosilca from the University of Tennessee, Knoxville.  Be sure to come by to hear about where we are and where we’re going in the Open MPI project.  There’s also an MPI Forum BOF for anyone who wants a glimpse of where we’re going on the standards committee.  I highly recommend attending for anyone who works with MPI.

Additionally, I’ll be hanging out in the Cisco Booth (#1847); stop by and say hello!

(Editor’s note: fixed the link to the Cisco booth — thanks to Edric and others who pointed out that it was wrong!)


GPU: HPC Friend or Foe?

October 8, 2009 at 12:00 pm PST

General purpose computing with GPUs looks like a great concept on paper.  Indeed, SC’08 was dominated by GPUs — it was impossible not to be (technically) impressed with some of the results that were being cited and shown on the exhibit floor.  But despite that, GPGPUs have failed to become a “must have” HPC technology over the past year.  Last week’s announcements from NVIDIA look really great for the HPC crowd (aside from some embarrassing PR blunders) — they seem to address many of the shortcomings of prior-generation GPU usage in an HPC environment: more memory, more cores, ECC memory, better / cheaper memory management, etc.  Will GPUs become the new hotness in HPC?

The obvious question here is “Why is Jeff discussing GPUs on an MPI blog?”


Attaining High Performance Communications: A Vertical Approach

September 30, 2009 at 12:00 pm PST

It’s finally been published! 

I wrote a chapter on MPI in the book Attaining High Performance Communications: A Vertical Approach, edited by Dr. Ada Gavrilovska from the Georgia Institute of Technology.

[Book cover: Attaining High Performance Communications: A Vertical Approach]
The chapter author list reads like a who’s-who in high performance computing: several of my colleagues from the MPI Forum wrote pieces of this book, as well as many bright graduate students and other noted dignitaries in HPC.
