Cisco Blog > High Performance Computing Networking

MCAPI and MPI

December 9, 2011 at 11:15 am PST

From @softtalkblog, I was recently directed to an article about the Multicore Communication API (MCAPI) and MPI.  Interesting stuff.

The main sentiments expressed in the article seem quite reasonable:

  1. MCAPI plays better in the embedded space than MPI (that’s what MCAPI was designed for, after all).  Simply put: MPI is too feature-rich (read: big) for embedded environments, reflecting the different design goals of MCAPI vs. MPI.
  2. MCAPI + MPI might be a useful combination.  The article cites a few examples of using MCAPI to wrap MPI messages.  Indeed, I agree that MCAPI seems like it may be a useful transport in some environments.

One thing that puzzled me about the article, however, is that it states that MPI is terrible at moving messages around within a single server.

Huh.  That’s news to me…


Many Pairs of Eyes

December 1, 2011 at 7:00 am PST

Let me tell you a reason why open source and open communities are great: information sharing.

Let me explain…

I am Cisco’s representative to the Open MPI project, a middleware implementation of the Message Passing Interface (MPI) standard that facilitates big number crunching and parallel programming.  It’s a fairly large, complex code base: Ohloh says that there are over 674,000 lines of code.  Open MPI is portable to a wide variety of platforms and network types.

However, supporting everything that MPI is supposed to support, and providing the same experience on every platform and network, can be quite challenging.  For example, a user posted a problem to our mailing list the other day about a specific feature not working properly on OS X.


The MPI C++ Bindings

October 31, 2011 at 6:06 am PST

What a strange position I find myself in: the C++ bindings have become somewhat of a divisive issue in the MPI Forum.  There are basically three groups in the Forum:

  1. Those who want to keep the C++ bindings deprecated.  Meaning: do not delete them, but do not add any C++ bindings for new MPI-3 functions.
  2. Those who want to un-deprecate the C++ bindings.  Meaning: add C++ bindings for all new MPI-3 functions.
  3. Those who want to delete the C++ bindings.  Meaning: kill.  Axe.  Demolish.  Remove.  Never speak of them again.

Let me explain.


Shared Receive Queues

October 25, 2011 at 5:00 am PST

In my last post, I talked about the so-called eager RDMA optimization and the trade-off it strikes between resource consumption and latency.

Let’s talk about another optimization: shared receive queues.

Shared receive queues are not a new idea, and certainly not exclusive to MPI implementations.  They’re a way for multiple senders to send to a single receiver while only consuming resources from a common pool.


MPI tradeoffs: space vs. time

October 22, 2011 at 7:42 am PST

@brockpalen asked me a question on Twitter:

@jsquyres [can you discuss] common #MPI implementation assumptions made for performance and/or resource constraints?

Good question.  MPI implementations are full of trade-offs between performance and resource consumption.  Let’s discuss a few easy ones.
