In the January MPI Forum meeting, several proposals passed their 2nd votes, meaning that they are “in” MPI-3. That being said, MPI-3 is not yet finalized (and won’t be for many more months), so changes can still happen.
- Creating MPI_COMM_SPLIT_TYPE
- Making the C++ bindings optional
- Updating RMA (a.k.a., “one-sided”)
- Creating a new “MPIT” tools interface
I’ll describe each of these briefly below.
Tags: HPC, MPI-3.0
Welcome to 2012! I’m finally about caught up from the Christmas holidays, last week’s travel to the MPI Forum, etc. It’s time to get back to blogging.
Let’s start with a short one…
Rich Brueckner from InsideHPC interviewed me right before the Christmas break about the low-latency Ethernet MPI demo that I gave at SC’11. I’ve blogged about this technology before, but in the slidecast that Rich posted, I provide a bit more detail about how it works.
Remember that this is Cisco’s 1st-generation virtualized NIC; our 2nd generation is coming “soon,” and will have significantly lower MPI latency. (I hate being fuzzy and not quoting exact numbers, but the product is not yet released, so I can’t comment on it yet. I’ll post the numbers when the product is actually available.)
Tags: HPC, Linux, mpi, VFIO
MPI_Bcast("Hi, this is Jeff Squyres. I'm not in the office this week. "
          "I'll see your message when I return in 2012. Happy New Year!",
          1, MPI_MESSAGE, 0, MPI_COMM_WORLD);
MPI_Bcast("Beep.", 1, MPI_MESSAGE, 0, MPI_COMM_WORLD);
MPI_Recv(your_message, 1, MPI_MESSAGE, MPI_ANY_SOURCE, MPI_ANY_TAG,
         MPI_COMM_WORLD, MPI_STATUS_IGNORE);
Tags: HPC, mpi
The upcoming January 2012 MPI Forum meeting is the last meeting to get new material into the MPI-3.0 specification.
Specifically, there are three steps to getting something into the MPI specification: a formal reading and two separate votes. Each of these three steps must happen at a separate meeting. This makes adding new material a long process… but that’s a good thing in terms of a standard. You want to be sure. You need a good amount of time for reflection and investigation before you standardize something for the next 10-20 years.
Of course, due to the deadline, we have a giant list of proposals up for a first reading in January (not including the 1st and 2nd votes that are also on the agenda). Here’s what’s on the docket so far: some are big, new things, while others are small clarifications to existing language.
Tags: HPC, mpi, MPI-3.0
After some further thought, I believe I was too quick to say that MPI is not a good fit for the embedded / real-time (RT) space.
Yes, MPI is “large” (hundreds of functions with lots of bells and whistles). Yes, mainstream MPI is not primarily targeted towards RT environments.
But this does not mean that there have been no successful forays of MPI into this space. Two obvious ones jump to mind:
Tags: Embedded, HPC, mpi, RT