
MPI and Java: redux

In a prior blog entry, I discussed how we are resurrecting a Java interface for MPI in the upcoming v1.7 release of Open MPI.

Some users have already experimented with this interface and found it lacking in at least two ways:

  1. Creating datatypes from multi-dimensional arrays doesn’t work, because Java stores them internally as arrays of references to row arrays rather than as one contiguous block (see the sketch below)
  2. The interface only supports a subset of MPI-1.1 functions
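
To make the first point concrete: a Java double[][] is not one contiguous buffer, but an array of references to separately allocated row arrays, so there is no single memory layout for an MPI datatype to describe. Here is a minimal, purely illustrative Java sketch (no MPI calls; the class and variable names are hypothetical) of the usual workaround of copying the data into a flat 1-D array before handing it to MPI:

```java
// Illustration only: why a Java 2-D array is awkward as an MPI buffer.
// A double[rows][cols] is an array of references to row arrays, each of
// which may live anywhere on the heap; there is no single contiguous
// block that an MPI datatype can describe.
public class FlattenExample {
    public static void main(String[] args) {
        int rows = 4, cols = 3;
        double[][] matrix = new double[rows][cols];

        // Fill with some values.
        for (int i = 0; i < rows; i++) {
            for (int j = 0; j < cols; j++) {
                matrix[i][j] = i * cols + j;
            }
        }

        // Common workaround: copy into a contiguous 1-D array, which can
        // then be passed to a send/receive call as a flat buffer of
        // (rows * cols) doubles.
        double[] flat = new double[rows * cols];
        for (int i = 0; i < rows; i++) {
            System.arraycopy(matrix[i], 0, flat, i * cols, cols);
        }

        System.out.println("Flattened length: " + flat.length);
    }
}
```

The extra copy is exactly the kind of overhead that proper multi-dimensional datatype support would avoid, which is why this limitation matters to users.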

These are completely valid criticisms.  And I’m incredibly thankful to the Open MPI user community for taking the time to kick the tires on this interface and give us valid feedback.


Cisco @SC2012

Going to Salt Lake City for Supercomputing 2012 next week?  So are we!

Be sure to drop by and see us in the Cisco booth (#2517).  I’ll be there, demonstrating and talking about our latest developments in ultra-low-latency Ethernet (hint: it includes 250ns port-to-port Ethernet switch latency and our latest MPI/OS-bypass technology on the Cisco Virtualized NIC in Cisco UCS servers).

In short: everyone assumes Ethernet is slow.  Everyone is wrong.

I’ll also be co-hosting the Open MPI State of the Union BOF with George Bosilca from the University of Tennessee in the Wednesday noon timeslot (room 155B).

I’ll be one of the judges in the Student Cluster Competition, too.  Be sure to drop by and see the teams; they make an amazing effort every year.

Finally, this isn’t really SC-related, but Cisco will be hosting the MPI Forum meeting again in December.  Register and come join the discussion that will shape HPC for the next 10 years.


Process Affinity in OMPI v1.7 (part 2)

In my last post, I described the Simple mode of Open MPI v1.7’s process affinity system.

The Simple mode is actually quite flexible, and we anticipate that it will meet most users’ needs. However, some users will need more flexibility. That’s what the Expert mode is for.

Before jumping into the Expert mode, though, let me describe two more features of the revamped v1.7 affinity system.


Process Affinity in OMPI v1.7 (part 1)

In my last post, I mentioned that we had just finished a complete revamp of the Open MPI process affinity system, but provided only a few details about what we actually did.

I did link to an SVN commit message, but I’ll wager that few readers — if any — actually read it.  :-)

Much of what is in the Open MPI v1.6.x series is the same as what Ralph Castain described in a prior blog post.  I’ll describe below what we changed for the v1.7 series.


Taking MPI Process Affinity to the Next Level

Process affinity is a hot topic.  With commodity servers getting more and more complex internally (think: NUMA and NUNA), placing and binding individual MPI processes to specific processor, cache, and memory resources is becoming quite important for delivered application performance.

MPI implementations have long offered options for laying out MPI processes across the resources allocated for the job.  Such options typically include round-robin schemes by core or by server node.  Additionally, MPI processes can be bound to individual processor cores (and even sockets).
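
To make those two round-robin schemes concrete, here is a small, purely illustrative Java sketch (not Open MPI’s actual mapping code; the node and core counts are made up) that prints where 8 ranks land on 2 nodes with 4 cores each under each policy:

```java
// Illustrative sketch of two classic round-robin mapping policies
// (not Open MPI's actual mapping implementation).
public class RoundRobinMapping {
    public static void main(String[] args) {
        int nodes = 2, coresPerNode = 4, ranks = 8;

        // "By core" (by slot): fill all cores on a node before
        // moving on to the next node.
        System.out.println("Map by core:");
        for (int r = 0; r < ranks; r++) {
            int node = (r / coresPerNode) % nodes;
            int core = r % coresPerNode;
            System.out.println("  rank " + r + " -> node " + node + ", core " + core);
        }

        // "By node": deal ranks out across nodes like a deck of cards.
        System.out.println("Map by node:");
        for (int r = 0; r < ranks; r++) {
            int node = r % nodes;
            int core = (r / nodes) % coresPerNode;
            System.out.println("  rank " + r + " -> node " + node + ", core " + core);
        }
    }
}
```

The difference matters for application performance: mapping by core keeps consecutive ranks on the same node (often good for nearest-neighbor communication), while mapping by node spreads them out (often good for balancing memory bandwidth and network links).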

Today caps a long-standing effort among Josh Hursey, Terry Dontje, Ralph Castain, and me (all developers in the Open MPI community) to revamp the processor affinity system in Open MPI.

The first implementation of the Location Aware Mapping Algorithm (LAMA) for process mapping, binding, and ordering has been committed to the Open MPI SVN trunk.  LAMA provides a whole new level of processor affinity control to the end user.
