Cisco Blog: High Performance Computing Networking

Top 10 reasons why buffered sends are evil

February 13, 2012 at 5:00 am PST

I made an offhand remark in my last entry about how MPI buffered sends are evil.  In a comment on that entry, @brockpalen asked me why.

I gave a brief explanation in a comment reply, but the subject is enough to warrant its own blog entry.

So here it is — my top 10 reasons why MPI_BSEND (and its two variants) are evil:

  1. Buffered sends generally force an extra copy of the outgoing message (i.e., a copy from the application’s buffer to internal MPI storage).  Note that I said “generally” — an MPI implementation doesn’t have to copy.  But the MPI standard says “Thus, if a send is executed and no matching receive is posted, then MPI must buffer the outgoing message…”  Ouch.  Most implementations therefore simply always copy the message and then start processing the send (see the sketch below).
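To make that copy concrete, here is a minimal sketch (my own example, not code from the post) of the buffered-send call sequence: MPI_BSEND returns as soon as the message has been copied into the buffer that the application attached beforehand.

    #include <mpi.h>
    #include <cstdlib>

    int main(int argc, char **argv) {
        MPI_Init(&argc, &argv);

        int rank;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        int payload[1024] = {0};
        // The attached buffer must hold the message plus
        // MPI_BSEND_OVERHEAD bytes of MPI bookkeeping.
        int size = static_cast<int>(sizeof(payload)) + MPI_BSEND_OVERHEAD;
        char *buffer = static_cast<char *>(std::malloc(size));
        MPI_Buffer_attach(buffer, size);

        if (rank == 0) {
            // MPI copies payload into the attached buffer and returns
            // immediately; the actual transmission may finish much later.
            MPI_Bsend(payload, 1024, MPI_INT, 1, 0, MPI_COMM_WORLD);
        } else if (rank == 1) {
            MPI_Recv(payload, 1024, MPI_INT, 0, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
        }

        // Detach blocks until all buffered messages have been transmitted.
        MPI_Buffer_detach(&buffer, &size);
        std::free(buffer);
        MPI_Finalize();
        return 0;
    }

That copy into the attached buffer is exactly the extra overhead this reason is complaining about.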


How many ways to send?

February 11, 2012 at 4:00 am PST

Pop quiz, hotshot: how many types of sends are there in MPI?

Most people will immediately think of MPI_SEND.  A few of you will remember the non-blocking variant, MPI_ISEND (where I = “immediate”).

But what about the rest — can you name them?

Here’s a hint: if I run “ls -1 *send*c | wc -l” in Open MPI’s MPI API source code directory, the result is 14.  MPI_SEND and MPI_ISEND are two of those 14.  Can you name the other 12?
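If you want to check your guesses, here is my own enumeration (an assumption on my part, since the excerpt above doesn’t list them) of the send-related entry points that the MPI standard of the day defines, which I’d expect to match those 14 files one-to-one:

    // My own answer key, not confirmed by the excerpt above.
    #include <cstdio>

    int main() {
        const char *sends[] = {
            "MPI_Send",       "MPI_Isend",             // standard
            "MPI_Bsend",      "MPI_Ibsend",            // buffered
            "MPI_Ssend",      "MPI_Issend",            // synchronous
            "MPI_Rsend",      "MPI_Irsend",            // ready
            "MPI_Send_init",  "MPI_Bsend_init",        // persistent
            "MPI_Ssend_init", "MPI_Rsend_init",
            "MPI_Sendrecv",   "MPI_Sendrecv_replace"   // combined send+receive
        };
        for (const char *name : sends)
            std::printf("%s\n", name);
        return 0;
    }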


Resurrecting MPI and Java

January 28, 2012 at 8:21 am PST

Back in the ’90s, there was a huge bubble of activity around Java in academic circles.  It was the new language that was going to take over the world.  An immense amount of research mapped classic computer science problems onto Java.

Among the projects produced were several that tried to bring MPI to Java.  That is, they added a set of Java bindings over existing C-based MPI implementations.  However, many in the HPC crowd eschewed Java for compute- or communication-heavy applications because of performance overheads inherent to the Java language and runtime implementations.

Hence, the Java+MPI=HPC efforts never gained much traction.

But even though the computer science Java bubble eventually ended, Java has become quite an important language in the enterprise.  Java runtime environments, compilers, and programming models have steadily improved over the years.  Java is now commonly used for many different types of compute-heavy enterprise applications.


How to send C++ STL objects in MPI?

January 24, 2012 at 6:45 am PST

A while ago, Brock Palen tweeted me an MPI question: how does one send Standard Template Library (STL) C++ objects in MPI?

The problem that Brock is asking about is that STL objects tend to vary in size and type.  The whole point of the STL is to create flexible, easy-to-use “containers” of arbitrary types.  For example, STL lists allow you to create an arbitrary-length list of a given type.

To cite a concrete example, let’s say that my application has an STL vector object named my_vector that contains a bunch of integers.  What parameters do I pass to MPI_SEND to send this beast?
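Here is a minimal sketch of one common answer (my own example; the full post may settle on a different approach): a std::vector stores its elements contiguously, so for a vector of a plain type like int, you can pass the address of its first element directly to MPI_SEND.

    #include <mpi.h>
    #include <vector>

    int main(int argc, char **argv) {
        MPI_Init(&argc, &argv);

        int rank;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        if (rank == 0) {
            std::vector<int> my_vector(100, 42);
            // Tell the receiver how many elements to expect...
            int count = static_cast<int>(my_vector.size());
            MPI_Send(&count, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
            // ...then send the vector's contiguous storage directly.
            MPI_Send(&my_vector[0], count, MPI_INT, 1, 1, MPI_COMM_WORLD);
        } else if (rank == 1) {
            int count;
            MPI_Recv(&count, 1, MPI_INT, 0, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
            std::vector<int> my_vector(count);
            MPI_Recv(&my_vector[0], count, MPI_INT, 0, 1, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
        }

        MPI_Finalize();
        return 0;
    }

Note that this trick relies on contiguous storage and a plain element type; an std::list or a vector of non-trivial objects would have to be packed or serialized first.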


MPI + VFIO on InsideHPC slidecast

January 18, 2012 at 8:08 am PST

Welcome to 2012!  I’m finally about caught up from the Christmas holidays, last week’s travel to the MPI Forum, etc.  It’s time to get my blogging back on.

Let’s start with a short one…

Rich Brueckner from InsideHPC interviewed me right before the Christmas break about the low Ethernet MPI latency demo that I gave at SC’11.  I blogged about this stuff before, but in the slidecast that Rich posted, I provide a bit more detail about how this technology works.

Remember that this is Cisco’s 1st-generation virtualized NIC; our 2nd generation is coming “soon,” and will have significantly lower MPI latency.  (I hate being fuzzy and not quoting the exact numbers, but the product is not yet released, so I can’t comment on it yet.  I’ll post the numbers when the product is actually available.)
