Cisco Blogs: High Performance Computing Networking

Message size: big or small?

January 28, 2013 at 6:15 am PST

It’s the eternal question: should I send lots and lots of small messages, or should I glump multiple small messages into a single, bigger message?

Unfortunately, the answer is: it depends.  There are a lot of factors in play.
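
The full reasoning is in the rest of the post, but the core trade-off can be sketched in a few lines of C (my own illustration; the function names, counts, tags, and datatypes are made up): many small sends each pay the per-message overhead again, while one coalesced send pays it once but delays everything until the whole buffer is ready.

    #include <mpi.h>

    #define N 1024

    /* Many small messages: one MPI_Send per value.  Each send pays the
       per-message costs (latency, matching, protocol headers) again. */
    void send_small(double *vals, int dest)
    {
        for (int i = 0; i < N; ++i) {
            MPI_Send(&vals[i], 1, MPI_DOUBLE, dest, 0, MPI_COMM_WORLD);
        }
    }

    /* One bigger message: the per-message cost is paid once, but the
       sender must wait until all N values are available, and the
       receiver cannot act on any of them until the whole buffer arrives. */
    void send_coalesced(double *vals, int dest)
    {
        MPI_Send(vals, N, MPI_DOUBLE, dest, 0, MPI_COMM_WORLD);
    }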

MPI and Java: redux

January 18, 2013 at 5:00 am PST

In a prior blog entry, I discussed how we are resurrecting a Java interface for MPI in the upcoming v1.7 release of Open MPI.

Some users have already experimented with this interface and found it lacking, in at least two ways:

  1. Creating datatypes of multi-dimensional arrays doesn’t work because of how Java represents them internally (a multi-dimensional Java array is an array of references to row arrays, not one contiguous block of memory)
  2. The interface only supports a subset of MPI-1.1 functions

These are completely valid criticisms.  And I’m incredibly thankful to the Open MPI user community for taking the time to kick the tires on this interface and give us valid feedback.

MPI_REQUEST_FREE is Evil

January 15, 2013 at 11:06 am PST

It was pointed out to me that in my last blog post (Don’t leak MPI_Requests), I failed to mention the MPI_REQUEST_FREE function.

True enough — I did fail to mention it.  But I did so on purpose, because MPI_REQUEST_FREE is evil.

Let me explain…
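
The short version, as a sketch of my own (not code from the post; the function name and buffer size are invented): freeing an active request throws away the only handle that could ever tell you the operation finished, so you can never know when the buffer is safe to reuse or free.

    #include <mpi.h>
    #include <stdlib.h>

    void fire_and_forget(int dest)
    {
        double *buf = calloc(1024, sizeof(double));
        MPI_Request req;

        MPI_Isend(buf, 1024, MPI_DOUBLE, dest, 0, MPI_COMM_WORLD, &req);
        MPI_Request_free(&req);   /* the request handle is now gone */

        /* There is no longer any MPI call that can tell us when this send
           completes.  Reusing or freeing buf here races with the library;
           it is only safe after some out-of-band guarantee (e.g., a reply
           from the receiver). */
        /* free(buf);  <-- not safe yet */
    }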

Don’t leak MPI_Requests

December 22, 2012 at 8:02 am PST

With the Mayan apocalypse safely behind us, we can now discuss MPI again.

An MPI application developer came to me the other day with a potential bug in Open MPI: he noticed that Open MPI was consuming so much memory that his application’s own attempts to allocate memory were failing.  Ouch!

It turns out, however, that the real problem was that he was never completing his MPI_Requests.  He would start non-blocking sends and receives, but then he would use some other mechanism to “know” that those sends and receives had completed.
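
Roughly, the anti-pattern and its fix look like the sketch below (an illustration of my own, not the user’s actual code; the function names are invented): every request returned by a nonblocking call has to be completed with MPI_Wait, MPI_Test, or one of their variants, or the library keeps the request’s internal state alive indefinitely.

    #include <mpi.h>

    /* Leaky: the request is never completed, so Open MPI keeps its
       internal bookkeeping for this operation alive indefinitely. */
    void leaky_send(double *buf, int count, int dest)
    {
        MPI_Request req;
        MPI_Isend(buf, count, MPI_DOUBLE, dest, 0, MPI_COMM_WORLD, &req);
        /* req is dropped on the floor here */
    }

    /* Correct: the request is eventually completed, which also releases
       the resources attached to it. */
    void completed_send(double *buf, int count, int dest)
    {
        MPI_Request req;
        MPI_Isend(buf, count, MPI_DOUBLE, dest, 0, MPI_COMM_WORLD, &req);
        /* ...do other work to overlap communication and computation... */
        MPI_Wait(&req, MPI_STATUS_IGNORE);
    }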

McMPI

December 10, 2012 at 8:08 am PST

Today’s guest blog entry comes from Daniel Holmes, an Applications Developer at EPCC.

I met Jeff at EuroMPI in September, and he has invited me to write a few words on my experience of developing an MPI library.

My PhD involved building a message passing library using C#; not accessing an existing MPI library from C# code, but creating a brand new MPI library written entirely in pure C#. The result is McMPI (Managed-code MPI), which is compliant with MPI-1, as far as it can be given that there are no C# language bindings in the MPI Standard. It also shows reasonably good latency and bandwidth in micro-benchmarks, both in shared memory and in distributed memory.
