A while ago, Brock Palen tweeted me an MPI question: how does one send Standard Template Library (STL) C++ objects in MPI?
The problem that Brock is asking about is that STL objects tend to be of variable size and type. The whole point of the STL is to create flexible, easy-to-use “containers” of arbitrary types. For example, STL lists allow you to create an arbitrary-length list of a given type.
To cite a concrete example, let’s say that my application has an STL vector object named my_vector that contains a bunch of integers. What parameters do I pass to MPI_SEND to send this beast?
Tags: HPC, mpi
Welcome to 2012! I’m finally about caught up from the Christmas holidays, last week’s travel to the MPI Forum, etc. It’s time to finally get back to blogging.
Let’s start with a short one…
Rich Brueckner from InsideHPC interviewed me right before the Christmas break about the low Ethernet MPI latency demo that I gave at SC’11. I blogged about this stuff before, but in the slidecast that Rich posted, I provide a bit more detail about how this technology works.
Remember that this is Cisco’s 1st generation virtualized NIC; our 2nd generation is coming “soon,” and will have significantly lower MPI latency (I hate being fuzzy and not quoting the exact numbers, but the product is not yet released, so I can’t comment on it yet. I’ll post the numbers when the product is actually available).
Tags: HPC, Linux, mpi, VFIO
MPI_Bcast("Hi, this is Jeff Squyres. I'm not in the office this week. "
          "I'll see your message when I return in 2012. Happy New Year!",
          1, MPI_MESSAGE, 0, MPI_COMM_WORLD);
MPI_Bcast("Beep.", 1, MPI_MESSAGE, 0, MPI_COMM_WORLD);
MPI_Recv(your_message, 1, MPI_MESSAGE, MPI_ANY_SOURCE, MPI_ANY_TAG,
         MPI_COMM_WORLD, &status);
Tags: HPC, mpi
The upcoming January 2012 MPI Forum meeting is the last meeting to get new material into the MPI-3.0 specification.
Specifically, there are three steps to getting something into the MPI specification: a formal reading and two separate votes. Each of these three steps must happen at a separate meeting. This makes adding new material a long process… but that’s a good thing in terms of a standard. You want to be sure. You need a good amount of time for reflection and investigation before you standardize something for the next 10-20 years.
Of course, due to the deadline, we have a giant list of proposals up for a first reading in January (this is not including the 1st and 2nd votes also on the agenda). Here’s what’s on the docket so far — some are big, new things, while others are small clarifications to existing language:
Tags: HPC, mpi, MPI-3.0
After some further thought, I do believe that I was too quick to say that MPI is not a good fit for the embedded / RT space.
Yes, MPI is “large” (hundreds of functions with lots of bells and whistles). Yes, mainstream MPI is not primarily targeted towards RT environments.
But this does not mean that there have not been successful forays of MPI into this space. Two obvious ones jump to mind:
Tags: Embedded, HPC, mpi, RT