Social media login no longer required for comments

1 min read

A number of you complained when blogs.cisco.com switched to requiring a social media login to leave comments. It turns out that you were not alone. Industry-wide, it seems that many people do not want to associate their personal Facebook/Twitter/etc. logins with work-related social media (i.e., this effect was seen at more than just Cisco).  The […]

EuroMPI 2013: papers due soon!

1 min read

Consider this a public service announcement: don’t forget that EuroMPI 2013 papers are due soon! EuroMPI is the place to see where the documented standard of MPI hits reality, both in terms of implementations and applications.  Come talk to real implementors, real users, and hear about state-of-the-art techniques and performance optimizations.

MPI for mobile devices (or not)

2 min read

Every once in a while, the idea pops up again: why not use all the world’s cell phones for parallel and/or distributed computations? There are gazillions of these phones — think of the computing power! After all, an army of ants can defeat a war horse, right? Well… yes… and no.

MPI Forum: What’s Next?

Now that we’re just entering the MPI-3.0 era, what’s next? The MPI Forum is still having active meetings.  What is left to do?  Isn’t MPI “done”? Nope.  MPI is an ever-changing standard that evolves to meet the needs of HPC.  And since HPC keeps changing, so does MPI.

Ain’t your father’s TCP

TCP?  Who cares about TCP in HPC? More and more people, actually.  With the commoditization of HPC, lots of newbie HPC users are intimidated by special, one-off, traditional HPC types of networks and opt for the simplicity and universality of Ethernet. And it turns out that TCP doesn’t suck nearly as much as most (HPC) […]

Modern GPU Integration in MPI

Today’s guest post is from Rolf vandeVaart, a Senior CUDA Engineer with NVIDIA. GPUs are becoming quite popular as accelerators in High Performance Computing clusters. For example, check out Titan, a recent entry in the Top 500 list from Oak Ridge National Laboratory. Titan has 18,688 nodes (299,008 CPU cores) coupled with 18,688 NVIDIA Tesla K20 […]
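
The heart of modern GPU integration is letting MPI operate directly on GPU memory. Here is a minimal sketch (not from the original post, and assuming an MPI build with CUDA support, such as a CUDA-aware Open MPI): a device pointer is handed straight to MPI_Send/MPI_Recv, with no explicit host staging.

```c
/* Minimal sketch: with a CUDA-aware MPI, a GPU (device) pointer can be
 * passed directly to MPI calls; the library handles the data movement. */
#include <mpi.h>
#include <cuda_runtime.h>

int main(int argc, char *argv[])
{
    int rank;
    double *d_buf;                        /* points to GPU memory */
    const int count = 1024;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    cudaMalloc((void **) &d_buf, count * sizeof(double));

    if (0 == rank) {
        /* No cudaMemcpy() to a host buffer first -- the device
         * pointer goes straight to MPI_Send */
        MPI_Send(d_buf, count, MPI_DOUBLE, 1, 0, MPI_COMM_WORLD);
    } else if (1 == rank) {
        MPI_Recv(d_buf, count, MPI_DOUBLE, 0, 0, MPI_COMM_WORLD,
                 MPI_STATUS_IGNORE);
    }

    cudaFree(d_buf);
    MPI_Finalize();
    return 0;
}
```

Without CUDA-aware support, the same exchange would need explicit cudaMemcpy() staging to and from host buffers on both sides.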

Process and memory affinity: why do you care?

3 min read

I’ve written about NUMA effects and process affinity on this blog lots of times in the past.  It’s a complex topic that has a lot of real-world effects on your MPI and HPC applications.  If you’re not using processor and memory affinity, you’re likely experiencing performance degradation without even realizing it. In short: If you’re not booting […]
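
A quick way to find out whether affinity is actually in effect is to have each process report its own binding. Here is a small, Linux-specific diagnostic sketch (not from the original post; it assumes glibc’s sched_getaffinity()):

```c
/* Each MPI process prints which cores it is allowed to run on, so you
 * can verify whether your launcher actually applied processor affinity.
 * Linux-specific: relies on _GNU_SOURCE and sched_getaffinity(). */
#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>
#include <mpi.h>

int main(int argc, char *argv[])
{
    int rank, i;
    cpu_set_t set;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    CPU_ZERO(&set);
    sched_getaffinity(0, sizeof(set), &set);   /* 0 == this process */

    printf("Rank %d may run on cores:", rank);
    for (i = 0; i < CPU_SETSIZE; ++i) {
        if (CPU_ISSET(i, &set)) {
            printf(" %d", i);
        }
    }
    printf("\n");

    MPI_Finalize();
    return 0;
}
```

If every rank prints the full list of cores on the node, no binding has been applied: ranks are free to migrate between cores (and to fight over the same ones).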

Message size: big or small?

3 min read

It’s the eternal question: should I send lots and lots of small messages, or should I glom multiple small messages together into a single, bigger message? Unfortunately, the answer is: it depends.  There are a lot of factors in play.
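
To make the trade-off concrete, here is a minimal sketch (illustrative names and sizes, not from the original post): many small sends pay the per-message latency N times, while one packed send pays it once but adds local copy overhead.

```c
#include <stdlib.h>
#include <string.h>
#include <mpi.h>

#define N     1000   /* number of small pieces to send */
#define CHUNK 16     /* doubles per piece */

/* Option 1: many small messages.  Each send pays the full per-message
 * latency, so the total cost is roughly N * latency. */
void send_many_small(double *pieces[], int peer)
{
    for (int i = 0; i < N; ++i) {
        MPI_Send(pieces[i], CHUNK, MPI_DOUBLE, peer, 0, MPI_COMM_WORLD);
    }
}

/* Option 2: pack everything into one big message.  One latency, better
 * bandwidth utilization -- but you pay for the local packing copies
 * (and the receiver may need to unpack). */
void send_one_big(double *pieces[], int peer)
{
    double *packed = malloc(N * CHUNK * sizeof(double));
    for (int i = 0; i < N; ++i) {
        memcpy(packed + i * CHUNK, pieces[i], CHUNK * sizeof(double));
    }
    MPI_Send(packed, N * CHUNK, MPI_DOUBLE, peer, 0, MPI_COMM_WORLD);
    free(packed);
}
```

Which option wins depends on your network’s latency and bandwidth characteristics, the actual message sizes, and whether the packing (and unpacking) cost can be hidden behind other work.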

MPI and Java: redux

1 min read

In a prior blog entry, I discussed how we are resurrecting a Java interface for MPI in the upcoming v1.7 release of Open MPI. Some users have already experimented with this interface and found it lacking in at least two ways: Creating datatypes of multi-dimensional arrays doesn’t work because of how Java handles them internally […]