Welcome to 2012! I’m finally about caught up from the Christmas holidays, last week’s travel to the MPI Forum, etc. It’s time to get back to blogging.
Let’s start with a short one…
Rich Brueckner from InsideHPC interviewed me right before the Christmas break about the low Ethernet MPI latency demo that I gave at SC’11. I blogged about this stuff before, but in the slidecast that Rich posted, I provide a bit more detail about how this technology works.
Remember that this is Cisco’s 1st generation virtualized NIC; our 2nd generation is coming “soon,” and will have significantly lower MPI latency (I hate being fuzzy and not quoting the exact numbers, but the product is not yet released, so I can’t comment on it yet. I’ll post the numbers when the product is actually available).
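For context on what “MPI latency” means here: it’s typically measured with a simple two-rank ping-pong microbenchmark, where half of the average round-trip time is the reported latency. Here’s a minimal sketch of that kind of benchmark (illustrative only; this is not the actual code from the SC’11 demo):

#include <stdio.h>
#include <mpi.h>

/* Minimal ping-pong latency sketch: ranks 0 and 1 bounce a 1-byte
   message back and forth; half of the average round-trip time is the
   usual "MPI latency" number.  Illustrative only. */
int main(int argc, char *argv[])
{
    const int iters = 10000;
    char buf[1] = { 0 };
    int rank;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    double start = MPI_Wtime();
    for (int i = 0; i < iters; ++i) {
        if (0 == rank) {
            MPI_Send(buf, 1, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
            MPI_Recv(buf, 1, MPI_CHAR, 1, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        } else if (1 == rank) {
            MPI_Recv(buf, 1, MPI_CHAR, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            MPI_Send(buf, 1, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
        }
    }
    double elapsed = MPI_Wtime() - start;

    if (0 == rank) {
        printf("Average half-round-trip latency: %f us\n",
               elapsed / iters / 2.0 * 1.0e6);
    }

    MPI_Finalize();
    return 0;
}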
Tags: HPC, Linux, mpi, VFIO
MPI_Bcast("Hi, this is Jeff Squyres. I'm not in the office this week. "
"I'll see your message when I return in 2012. Happy New Year!",
1, MPI_MESSAGE, MPI_COMM_WORLD);
MPI_Bcast("Beep.", 1, MPI_MESSAGE, MPI_COMM_WORLD);
MPI_RECV(your_message, 1, MPI_MESSAGE, MPI_ANY_RANK, MPI_ANY_TAG,
Tags: HPC, mpi
The upcoming January 2012 MPI Forum meeting is the last meeting to get new material into the MPI-3.0 specification.
Specifically, there are three steps to getting something into the MPI specification: a formal reading and two separate votes. Each of these three steps must happen at a separate meeting. This makes adding new material a long process… but that’s a good thing in terms of a standard. You want to be sure. You need a good amount of time for reflection and investigation before you standardize something for the next 10-20 years.
Of course, due to the deadline, we have a giant list of proposals up for a first reading in January (this is not including the 1st and 2nd votes also on the agenda). Here’s what’s on the docket so far: some are big, new things, while others are small clarifications to existing language.
Tags: HPC, mpi, MPI-3.0
After some further thought, I do believe that I was too quick to say that MPI is not a good fit for the embedded / RT space.
Yes, MPI is “large” (hundreds of functions with lots of bells and whistles). Yes, mainstream MPI is not primarily targeted towards RT environments.
But this does not mean that there have not been successful forays of MPI into this space. Two obvious ones jump to mind.
Tags: Embedded, HPC, mpi, RT
My last blog post about MCAPI and MPI is worth some further explanation…
There were a number of good questions raised (both publicly in comments, and privately to me via email).
I ended up chatting with some MCAPI people from PolyCore Software: Sven Brehmer and Ted Gribb. We had a very interesting discussion which I won’t try to replicate here. Instead, we ended up recording an RCE-Cast today about MCAPI and MPI. It’ll be released in a few weeks (Brock already had one teed up to be released this weekend).
The main idea is that Sven and Ted were not trying to say that MCAPI is faster/better than MPI.
MCAPI is squarely aimed at a different market than MPI — the embedded market. Think: accelerators, DSPs, FPGAs, etc. And although MCAPI can be used for larger things (e.g., multiple x86-type servers on a network), there are already well-established, high-quality tools for that (e.g., MPI).
So perhaps it might be interesting to explore the realm of MPI + MCAPI in some fashion.
There are a bunch of different forms that (MPI + MCAPI) could take — which one(s) would be useful? I cited a few forms in my prior blog post; we talked about a few more on the podcast.
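To make that a little more concrete, here’s one purely hypothetical form: MPI moves data between servers, and MCAPI hands each rank’s piece of the work to an on-node DSP or accelerator core. This is only a sketch; the domain/node/port IDs are invented, and the MCAPI calls follow my reading of the MCAPI 2.0 message API, so check the spec for the exact signatures:

#include <mpi.h>
#include <mcapi.h>

/* Hypothetical MPI + MCAPI hybrid: MPI moves data between servers,
   MCAPI ships each rank's piece to an on-node DSP and gets the
   results back.  The IDs below are invented for illustration only;
   consult the MCAPI spec for the exact call signatures. */
#define MY_NODE   1    /* assumed MCAPI node ID for this host process */
#define MY_PORT   10   /* assumed local MCAPI port */
#define DSP_NODE  2    /* assumed MCAPI node ID of the on-node DSP */
#define DSP_PORT  20   /* assumed MCAPI port that the DSP listens on */

int main(int argc, char *argv[])
{
    double work[128] = { 0 }, result[128] = { 0 };
    size_t recv_size;
    mcapi_status_t st;
    mcapi_info_t info;
    int rank;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    /* 1. MPI distributes the work across servers */
    MPI_Bcast(work, 128, MPI_DOUBLE, 0, MPI_COMM_WORLD);

    /* 2. MCAPI hands this rank's piece to the local DSP */
    mcapi_initialize(0 /* domain */, MY_NODE, NULL, NULL, &info, &st);
    mcapi_endpoint_t me  = mcapi_endpoint_create(MY_PORT, &st);
    mcapi_endpoint_t dsp = mcapi_endpoint_get(0 /* domain */, DSP_NODE, DSP_PORT,
                                              MCAPI_TIMEOUT_INFINITE, &st);
    mcapi_msg_send(me, dsp, work, sizeof(work), 0 /* priority */, &st);
    mcapi_msg_recv(me, result, sizeof(result), &recv_size, &st);
    mcapi_finalize(&st);

    /* 3. MPI collects the DSP results back on rank 0 */
    MPI_Reduce(result, work, 128, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);

    MPI_Finalize();
    return 0;
}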
But it’s hard to say without someone committing to doing some research, or a customer saying “I want this.” Talk is cheap — execution requires resources.
Would this be something that you, gentle reader, would be interested in? If so, let me know in the comments or drop me an email.
Tags: HPC, MCAPI, mpi, Multicore Association