
Resurrecting MPI and Java

January 28, 2012 at 8:21 am PST

Back in the ’90s, there was a huge bubble of activity around Java in academic circles.  It was the new language that was going to take over the world.  An immense amount of research was produced mapping classic computer science problems onto Java.

Among the projects produced were several that tried to bring MPI to Java.  That is, they added a set of Java bindings over existing C-based MPI implementations.  However, many in the HPC crowd eschewed Java for compute- or communication-heavy applications because of performance overheads inherent to the Java language and runtime implementations.

Hence, the Java+MPI=HPC efforts didn’t get too much traction.

But even though the computer science Java bubble eventually ended, Java has become quite an important language in the enterprise.  Java run-time environments, compilers, and programming models have steadily improved over the years.  Java is now commonly used for many different types of compute-heavy enterprise applications.

How to send C++ STL objects in MPI?

January 24, 2012 at 6:45 am PST

A while ago, Brock Palen tweeted me an MPI question: how does one send Standard Template Library (STL) C++ objects in MPI?

The problem that Brock is asking about is that STL objects tend to be variable in both size and type.  The whole point of the STL is to provide flexible, easy-to-use “containers” of arbitrary types.  For example, STL lists allow you to create an arbitrary-length list of a given type.

To cite a concrete example, let’s say that my application has an STL vector object named my_vector that contains a bunch of integers.  What parameters do I pass to MPI_SEND to send this beast?
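
To make the question concrete, here’s one minimal sketch for the easy case of a vector of integers: since a std::vector’s storage is guaranteed to be contiguous, you can send the element count first and then point MPI_SEND directly at the underlying buffer via &my_vector[0].  This is just an illustrative sketch, not the whole answer — the tags, ranks, and variable names are made up for the example, and it assumes at least two MPI processes.

#include <mpi.h>
#include <cstdio>
#include <vector>

// Illustrative sketch (run with at least 2 processes, e.g. mpirun -np 2):
// send the element count first, then point MPI_Send at the vector's
// contiguous storage.
int main(int argc, char* argv[])
{
    MPI_Init(&argc, &argv);
    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (0 == rank) {
        std::vector<int> my_vector;
        for (int i = 0; i < 10; ++i) {
            my_vector.push_back(i * i);
        }
        int count = (int) my_vector.size();
        // Tell the receiver how many integers are coming, then send them
        MPI_Send(&count, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
        MPI_Send(&my_vector[0], count, MPI_INT, 1, 1, MPI_COMM_WORLD);
    } else if (1 == rank) {
        int count = 0;
        MPI_Recv(&count, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        std::vector<int> my_vector(count);
        MPI_Recv(&my_vector[0], count, MPI_INT, 0, 1, MPI_COMM_WORLD,
                 MPI_STATUS_IGNORE);
        std::printf("Received %d integers; the last one is %d\n",
                    count, my_vector[count - 1]);
    }

    MPI_Finalize();
    return 0;
}

Sending an STL list, or a vector of non-trivial objects, is where it gets trickier.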

Recently Voted into MPI-3

January 23, 2012 at 7:03 am PST

In the January MPI Forum meeting, several proposals passed their 2nd votes, meaning that they are “in” MPI-3.  That being said, MPI-3 is not yet finalized (and won’t be for many more months), so changes can still happen.  Here’s what passed:

  • Creating MPI_COMM_SPLIT_TYPE
  • Making the C++ bindings optional
  • Updating RMA (a.k.a., “one-sided”)
  • Creating a new “MPI_T” tools interface

I’ll describe each of these briefly below.
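
As a small taste of the first item above, here’s roughly what MPI_COMM_SPLIT_TYPE looks like, sketched from the current proposal (names and semantics could still shift before MPI-3 is finalized).  The canonical use case is splitting MPI_COMM_WORLD into sub-communicators of processes that can share memory — i.e., the processes on the same node.

#include <mpi.h>
#include <cstdio>

// Sketch based on the MPI_COMM_SPLIT_TYPE proposal (details may change
// before MPI-3 is final): split MPI_COMM_WORLD into sub-communicators
// whose processes can share memory -- in practice, "everyone on my node."
int main(int argc, char* argv[])
{
    MPI_Init(&argc, &argv);

    MPI_Comm node_comm;
    MPI_Comm_split_type(MPI_COMM_WORLD, MPI_COMM_TYPE_SHARED,
                        0 /* key: keep the existing rank ordering */,
                        MPI_INFO_NULL, &node_comm);

    int node_rank, node_size;
    MPI_Comm_rank(node_comm, &node_rank);
    MPI_Comm_size(node_comm, &node_size);
    std::printf("I am rank %d of %d on my node\n", node_rank, node_size);

    MPI_Comm_free(&node_comm);
    MPI_Finalize();
    return 0;
}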

MPI + VFIO on InsideHPC slidecast

January 18, 2012 at 8:08 am PST

Welcome to 2012!  I’m finally about caught up from the Christmas holidays, last week’s travel to the MPI Forum, etc.  Time to get my blogging back on.

Let’s start with a short one…

Rich Brueckner from InsideHPC interviewed me right before the Christmas break about the low Ethernet MPI latency demo that I gave at SC’11.  I blogged about this stuff before, but in the slidecast that Rich posted, I provide a bit more detail about how this technology works.

Remember that this is Cisco’s 1st generation virtualized NIC; our 2nd generation is coming “soon,” and will have significantly lower MPI latency (I hate being fuzzy and not quoting the exact numbers, but the product is not yet released, so I can’t comment on it yet.  I’ll post the numbers when the product is actually available).

MPI_VACATION(2011)

December 30, 2011 at 10:00 am PST
MPI_Bcast("Hi, this is Jeff Squyres.  I'm not in the office this week. "
          "I'll see your message when I return in 2012. Happy New Year!", 
          1, MPI_MESSAGE, MPI_COMM_WORLD);
MPI_Bcast("Beep.", 1, MPI_MESSAGE, MPI_COMM_WORLD);

MPI_RECV(your_message, 1, MPI_MESSAGE, MPI_ANY_RANK, MPI_ANY_TAG, 
         MPI_COMM_WORLD, &status);
