Cisco Blogs: High Performance Computing Networking

MPI_VACATION(2011)

December 30, 2011 at 10:00 am PST
MPI_Bcast("Hi, this is Jeff Squyres.  I'm not in the office this week. "
          "I'll see your message when I return in 2012. Happy New Year!", 
          1, MPI_MESSAGE, MPI_COMM_WORLD);
MPI_Bcast("Beep.", 1, MPI_MESSAGE, MPI_COMM_WORLD);

MPI_RECV(your_message, 1, MPI_MESSAGE, MPI_ANY_RANK, MPI_ANY_TAG, 
         MPI_COMM_WORLD, &status);


Lots coming up for MPI-3.0

December 23, 2011 at 9:49 am PST

The upcoming January 2012 MPI Forum meeting is the last meeting to get new material into the MPI-3.0 specification.

Specifically, there are three steps to getting something into the MPI specification: a formal reading and two separate votes.  Each of these three steps must happen at a separate meeting.  This makes adding new material a long process… but that's a good thing for a standard.  You want to be sure.  You need a good amount of time for reflection and investigation before you standardize something for the next 10-20 years.

Of course, due to the deadline, we have a giant list of proposals up for a first reading in January (this is not including the 1st and 2nd votes also on the agenda).  Here’s what’s on the docket so far — some are big, new things, while others are small clarifications to existing language: Read More »


Embedded MPI

December 16, 2011 at 8:18 am PST

After some further thought, I do believe that I was too quick to say that MPI is not a good fit for the embedded / RT space.

Yes, MPI is “large” (hundreds of functions with lots of bells and whistles).  Yes, mainstream MPI is not primarily targeted towards RT environments.

But this does not mean that there have not been successful forays of MPI into this space.  Two obvious ones jump to mind: Read More »


MCAPI and MPI: take two

December 15, 2011 at 7:17 pm PST

My last blog post about MCAPI and MPI is worth some further explanation…

There were a number of good questions raised (both publicly in comments, and privately to me via email).

I ended up chatting with some MCAPI people from PolyCore Software: Sven Brehmer and Ted Gribb.  We had a very interesting discussion which I won’t try to replicate here.  Instead, we ended up recording an RCE-Cast today about MCAPI and MPI.  It’ll be released in a few weeks (Brock already had one teed up to be released this weekend).

The main idea is that Sven and Ted were not trying to say that MCAPI is faster/better than MPI.

MCAPI is squarely aimed at a different market than MPI: the embedded market.  Think: accelerators, DSPs, FPGAs, etc.  And although MCAPI can be used for larger things (e.g., multiple x86-type servers on a network), there are already well-established, high-quality tools for that (e.g., MPI).

So perhaps it might be interesting to explore the realm of MPI + MCAPI in some fashion.

There’s a bunch of different forms that (MPI + MCAPI) could take — which one(s) would be useful?  I cited a few forms in my prior blog post; we talked about a few more on the podcast.
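
To make one of those forms concrete, here is a rough sketch (mine, not something from the podcast) of the "MPI between servers, MCAPI within a server" split: one MPI rank per host moves data between machines and hands its slice of work to an on-node DSP or accelerator over a local messaging leg.  The local_msg_send() / local_msg_recv() helpers below are hypothetical stand-ins for whatever MCAPI endpoint calls a real implementation would use.

/* Sketch of one possible MPI + MCAPI split.  MPI handles inter-node
 * traffic; the hypothetical local_msg_send()/local_msg_recv() pair
 * stands in for the MCAPI endpoint calls that would move data to an
 * on-node DSP or accelerator. */
#include <stdio.h>
#include <string.h>
#include <mpi.h>

#define N 1024

/* Hypothetical stand-ins for an MCAPI-style on-node messaging leg. */
static void local_msg_send(const void *buf, size_t len) {
    (void) buf; (void) len;   /* e.g., a message send to the device's endpoint */
}
static void local_msg_recv(void *buf, size_t len) {
    memset(buf, 0, len);      /* e.g., a message receive from the device's endpoint */
}

int main(int argc, char **argv) {
    int rank, size, i;
    double chunk[N], processed[N], local_sum = 0.0, global_sum = 0.0;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    /* Each server's MPI rank prepares its own slice of work... */
    for (i = 0; i < N; i++)
        chunk[i] = rank + i * 0.001;

    /* ...hands it to the on-node device over the local (MCAPI-like) leg... */
    local_msg_send(chunk, sizeof(chunk));
    local_msg_recv(processed, sizeof(processed));

    /* ...and MPI stitches the per-server results back together. */
    for (i = 0; i < N; i++)
        local_sum += processed[i];
    MPI_Reduce(&local_sum, &global_sum, 1, MPI_DOUBLE, MPI_SUM,
               0, MPI_COMM_WORLD);

    if (rank == 0)
        printf("Combined result from %d servers: %f\n", size, global_sum);

    MPI_Finalize();
    return 0;
}

Whether MCAPI belongs underneath MPI, MPI on top of MCAPI, or a side-by-side split like this is exactly the open question.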

But it’s hard to say without someone committing to doing some research, or a customer saying “I want this.”  Talk is cheap — execution requires resources.

Would this be something that you, gentle reader, would be interested in?  If so, let me know in the comments or drop me an email.


MCAPI and MPI

December 9, 2011 at 11:15 am PST

Via @softtalkblog, I was recently directed to an article about the Multicore Communication API (MCAPI) and MPI.  Interesting stuff.

The main sentiments expressed in the article seem quite reasonable:

  1. MCAPI plays better in the embedded space than MPI (that’s what MCAPI was designed for, after all).  Simply put: MPI is too feature-rich (read: big) for embedded environments, reflecting the different design goals of MCAPI vs. MPI.
  2. MCAPI + MPI might be a useful combination.  The article cites a few examples of using MCAPI to wrap MPI messages.  Indeed, I agree that MCAPI seems like it may be a useful transport in some environments.

One thing that puzzled me about the article, however, is that it states that MPI is terrible at moving messages around within a single server.

Huh.  That’s news to me…
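
For what it's worth, that claim is easy to test yourself: run a plain MPI ping-pong with both ranks placed on the same server, so that messages go through the MPI implementation's on-node (typically shared-memory) transport.  A minimal sketch (mine, not from the article):

/* Two-rank ping-pong; launch with both ranks on one server
 * (e.g., "mpirun -np 2 ./pingpong") so messages travel through the
 * MPI implementation's on-node / shared-memory transport. */
#include <stdio.h>
#include <mpi.h>

#define ITERS 1000
#define LEN   1024

int main(int argc, char **argv) {
    int rank, i;
    char buf[LEN] = { 0 };
    double start, elapsed;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    start = MPI_Wtime();
    for (i = 0; i < ITERS; i++) {
        if (rank == 0) {
            MPI_Send(buf, LEN, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
            MPI_Recv(buf, LEN, MPI_CHAR, 1, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
        } else if (rank == 1) {
            MPI_Recv(buf, LEN, MPI_CHAR, 0, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
            MPI_Send(buf, LEN, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
        }
    }
    elapsed = MPI_Wtime() - start;

    if (rank == 0)
        printf("Average round trip for %d-byte messages: %f usec\n",
               LEN, elapsed / ITERS * 1.0e6);

    MPI_Finalize();
    return 0;
}

Those intra-server round trips go through shared memory in any mainstream MPI implementation, which is exactly why the article's claim surprised me.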

Read More »
