Consider this a public service announcement: don’t forget that EuroMPI 2013 papers are due soon! EuroMPI is the place to see where the documented standard of MPI hits reality, both in terms of implementations and applications. Come talk to real
Every once in a while, the idea pops up again: why not use all the world’s cell phones for parallel and/or distributed computations? There are gazillions of these phones — think of the computing power! After all, an army of ants can
Now that we’re just starting into the MPI-3.0 era, what’s next? The MPI Forum is still having active meetings. What is left to do? Isn’t MPI “done”? Nope. MPI is an ever-changing standard to meet the needs of HPC. And
TCP? Who cares about TCP in HPC? More and more people, actually. With the commoditization of HPC, lots of newbie HPC users are intimidated by specialized, one-off, traditional HPC networks and opt for the simplicity and universality of
Today’s guest post is from Rolf vandeVaart, a Senior CUDA Engineer with NVIDIA. GPUs are becoming quite popular as accelerators in High Performance Computing clusters. For example, check out Titan, a recent entry in the Top 500 list from Oak
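To make the idea concrete, here is a minimal sketch of the kind of code a CUDA-aware MPI lets you write: pass a device (GPU) buffer straight to MPI_Send/MPI_Recv and let the library handle the data movement. This assumes a CUDA-aware build of Open MPI; the buffer name and sizes are illustrative, not from Rolf's post.

```c
/* Sketch: sending a GPU buffer directly with a CUDA-aware MPI
 * (e.g., a CUDA-aware build of Open MPI). Buffer name and count
 * are illustrative. Run with at least 2 ranks. */
#include <mpi.h>
#include <cuda_runtime.h>

int main(int argc, char **argv)
{
    int rank;
    double *d_buf;           /* device (GPU) buffer */
    const int count = 1024;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    cudaMalloc((void **)&d_buf, count * sizeof(double));

    /* With a CUDA-aware MPI, the device pointer can be passed
     * straight to MPI; the library moves the data for you. */
    if (rank == 0) {
        MPI_Send(d_buf, count, MPI_DOUBLE, 1, 0, MPI_COMM_WORLD);
    } else if (rank == 1) {
        MPI_Recv(d_buf, count, MPI_DOUBLE, 0, 0, MPI_COMM_WORLD,
                 MPI_STATUS_IGNORE);
    }

    cudaFree(d_buf);
    MPI_Finalize();
    return 0;
}
```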
I’ve written about NUMA effects and process affinity on this blog lots of times in the past. It’s a complex topic that has a lot of real-world effects on your MPI and HPC applications. If you’re not using processor and memory
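As a quick sanity check on affinity, each rank can report which core it is currently running on. This is a minimal sketch, not from the original post; it assumes Linux/glibc for sched_getcpu().

```c
/* Sketch: have each rank report the core it is running on, a quick
 * way to see whether your processor affinity settings took effect. */
#define _GNU_SOURCE
#include <mpi.h>
#include <sched.h>
#include <stdio.h>
#include <unistd.h>

int main(int argc, char **argv)
{
    int rank;
    char host[64];

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    gethostname(host, sizeof(host));

    printf("Rank %d on %s is currently on core %d\n",
           rank, host, sched_getcpu());

    MPI_Finalize();
    return 0;
}
```

Run it with your MPI's binding options (for example, Open MPI's --bind-to-core in the 1.6 series, or --bind-to core in later releases) and compare the output with and without binding.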
It’s the eternal question: should I send lots and lots of small messages, or should I glom multiple small messages together into a single, bigger message? Unfortunately, the answer is: it depends. There are a lot of factors in play.
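To make the trade-off concrete, here is a minimal sketch (with made-up message counts and sizes) contrasting the two approaches: many small sends, each paying per-message latency and overhead, versus one aggregated send that pays that latency once but adds a copy into a staging buffer.

```c
/* Sketch: many small messages vs. one aggregated message.
 * Counts and sizes are illustrative. Run with at least 2 ranks. */
#include <mpi.h>
#include <string.h>

#define NUM_MSGS 1000
#define MSG_LEN  16   /* doubles per small message */

int main(int argc, char **argv)
{
    static double msgs[NUM_MSGS][MSG_LEN];   /* data to move */
    static double big[NUM_MSGS * MSG_LEN];   /* staging buffer */
    int rank;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 0) {
        /* Option 1: many small messages -- each pays latency/overhead. */
        for (int i = 0; i < NUM_MSGS; ++i)
            MPI_Send(msgs[i], MSG_LEN, MPI_DOUBLE, 1, 0, MPI_COMM_WORLD);

        /* Option 2: one aggregated message -- pays latency once, but
         * adds a copy into the staging buffer (and possibly a
         * rendezvous for the large message). */
        memcpy(big, msgs, sizeof(big));
        MPI_Send(big, NUM_MSGS * MSG_LEN, MPI_DOUBLE, 1, 1, MPI_COMM_WORLD);
    } else if (rank == 1) {
        for (int i = 0; i < NUM_MSGS; ++i)
            MPI_Recv(msgs[i], MSG_LEN, MPI_DOUBLE, 0, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
        MPI_Recv(big, NUM_MSGS * MSG_LEN, MPI_DOUBLE, 0, 1, MPI_COMM_WORLD,
                 MPI_STATUS_IGNORE);
    }

    MPI_Finalize();
    return 0;
}
```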
In a prior blog entry, I discussed how we are resurrecting a Java interface for MPI in the upcoming v1.7 release of Open MPI. Some users have already experimented with this interface and found it lacking in at least two ways: Creating datatypes of
It was pointed out to me that in my last blog post (Don’t leak MPI_Requests), I failed to mention the MPI_REQUEST_FREE function. True enough — I did fail to mention it. But I did so on purpose, because MPI_REQUEST_FREE is evil. Let me
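As a preview of the argument, here is a minimal sketch (illustrative, not from the post) of the core problem: once you call MPI_REQUEST_FREE on an active send request, you have no way left to learn when the operation completes, and therefore no way to know when the buffer is safe to touch again.

```c
/* Sketch: why freeing an active request is dangerous. */
#include <mpi.h>

void risky_send(double *buf, int count, int dest)
{
    MPI_Request req;

    MPI_Isend(buf, count, MPI_DOUBLE, dest, 0, MPI_COMM_WORLD, &req);
    MPI_Request_free(&req);   /* request gone; completion is now invisible */

    buf[0] = 42.0;            /* may race with the still-in-flight send! */
}

void safe_send(double *buf, int count, int dest)
{
    MPI_Request req;

    MPI_Isend(buf, count, MPI_DOUBLE, dest, 0, MPI_COMM_WORLD, &req);
    MPI_Wait(&req, MPI_STATUS_IGNORE);   /* completes AND frees the request */

    buf[0] = 42.0;            /* provably safe: the send has finished */
}
```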