
Don’t leak MPI_Requests

- December 22, 2012

With the Mayan apocalypse safely behind us, we can now discuss MPI again.

An MPI application developer came to me the other day with a potential bug in Open MPI: he noticed that Open MPI was consuming so much memory that his application's own allocations began to fail.  Ouch!

It turns out, however, that the real problem was that he was never completing his MPI_Requests.  He would start non-blocking sends and receives, but then he would use some other mechanism to “know” that those sends and receives had completed.
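
As a rough sketch (the function name and buffer details here are hypothetical, not his actual code), the anti-pattern looked something like this:

```c
/* Hypothetical sketch of the leaky pattern: the request returned by
 * MPI_Isend is never passed to MPI_Wait / MPI_Test, so the MPI library
 * can never reclaim the internal state it allocated for that request. */
#include <mpi.h>

void leaky_send(const double *buf, int count, int dest, MPI_Comm comm)
{
    MPI_Request req;
    MPI_Isend(buf, count, MPI_DOUBLE, dest, /*tag=*/0, comm, &req);
    /* ...application uses some out-of-band signal to decide the data
     * arrived, and never calls MPI_Wait / MPI_Test on req.  Every call
     * to this function leaks one request's worth of library memory. */
}
```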

This is a Bad Idea for many reasons, not the least of which is that every uncompleted request leaks the memory MPI allocated to track it.  In his case, those leaks piled up until the application could not allocate any more memory at all.
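
For contrast, here's a minimal sketch of the right way to do it, assuming a simple send/receive exchange with a single peer rank (the function name is mine, for illustration):

```c
#include <mpi.h>

void exchange(double *sendbuf, double *recvbuf, int count,
              int peer, MPI_Comm comm)
{
    MPI_Request reqs[2];
    MPI_Irecv(recvbuf, count, MPI_DOUBLE, peer, /*tag=*/0, comm, &reqs[0]);
    MPI_Isend(sendbuf, count, MPI_DOUBLE, peer, /*tag=*/0, comm, &reqs[1]);

    /* Completing the requests is what lets MPI release its internal
     * state.  MPI_Test / MPI_Testall work just as well if you don't
     * want to block. */
    MPI_Waitall(2, reqs, MPI_STATUSES_IGNORE);
}
```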

Here’s a short slideshow depicting why you should always complete your MPI_Requests.

Pass it on.
