
Platform Acquires HP-MPI

August 24, 2009 at 12:00 pm PST

In a move that will surely cause some head-scratching, Platform has acquired the intellectual property of the-MPI-previously-known-as-HP-MPI.

The head-scratching part is that Platform already owns Scali MPI. It’s no secret that they recently moved all Scali development to an engineering team based in China. Read More »

Better Linux memory tracking

August 21, 2009 at 12:00 pm PST

Yesterday morning, we (Open MPI) entered what is hopefully a final phase of testing for a “better” implementation of the “leave registered” optimization for OpenFabrics networks. I briefly mentioned this work in a prior blog entry; it’s now finally coming to fruition. Woo hoo!

Roland Dreier has pushed a new Linux kernel module upstream for helping user-level applications track when memory leaves their process (it’s not guaranteed that this kernel module will be accepted, but it looks good so far). This kernel module allows MPI implementations, for example, to be alerted when registered memory is freed — a critical operation for certain optimizations and proper under-the-covers resource management.

What does this mean to the average MPI application user? It means that future versions of Open MPI (and other MPI implementations) will finally have a solid, bulletproof way to implement the “leave registered” optimization for large message passing. Prior versions of this optimization required nasty, ugly, dirty Linux hacks that sometimes broke real-world applications. Boooo! The new way will not break any applications because it gets help from the underlying operating system (rather than trying to go around or hijack certain operating system functions). Yay! Read More »
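
To make the problem concrete, here’s a minimal sketch (my own, not from the post) of the allocation pattern that trips up a purely user-space registration cache; the buffer size, tags, and two-rank layout are arbitrary assumptions:

```c
/* Sketch of why the "leave registered" optimization needs kernel help:
 * the MPI library caches memory registrations keyed by virtual address,
 * but user code can free and re-allocate memory behind the library's back. */
#include <mpi.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);
    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    const int N = 4 * 1024 * 1024;   /* arbitrary "large" message size */
    char *buf = malloc(N);

    if (rank == 0) {
        /* First large send: the library registers buf and, with "leave
         * registered", caches the registration instead of tearing it down. */
        MPI_Send(buf, N, MPI_CHAR, 1, 0, MPI_COMM_WORLD);

        free(buf);          /* the cached registration is now stale...        */
        buf = malloc(N);    /* ...and the new buffer may reuse the same virtual
                               address while mapping different physical pages */

        /* Without a notification from the kernel that the old pages left the
         * process, the library can mistakenly reuse the stale registration. */
        MPI_Send(buf, N, MPI_CHAR, 1, 1, MPI_COMM_WORLD);
    } else if (rank == 1) {
        MPI_Recv(buf, N, MPI_CHAR, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        MPI_Recv(buf, N, MPI_CHAR, 0, 1, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
    }

    free(buf);
    MPI_Finalize();
    return 0;
}
```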

SEND, ISEND, or SENDRECV…?

August 16, 2009 at 12:00 pm PST

I find that there are generally two types of MPI application programmers:

  1. Those that only use standard (“blocking”) mode sends and receives
  2. Those that use non-blocking sends and receives

The topic of whether an MPI application should use only simple standard mode sends and receives or dive into the somewhat-more-complex non-blocking modes of communication comes up not-infrequently (it just came up again on the Open MPI users’ mailing list the other day). It’s always a challenge for programmers who are new to MPI to figure out which model they should use. Recently, we came across a user who chose a third solution: use MPI_SENDRECV. Read More »
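
As a rough illustration of that third option (my sketch, not the user’s actual code), here’s a pairwise exchange where two standard-mode sends posted head-to-head can deadlock once messages are too large to buffer eagerly, while MPI_SENDRECV lets the library handle the ordering; the buffer size and pairing scheme are arbitrary assumptions:

```c
/* Sketch of a pairwise exchange between neighboring ranks (0<->1, 2<->3, ...).
 * Assumes an even number of ranks; count and tag are arbitrary. */
#include <mpi.h>

#define COUNT 1024

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);
    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    int peer = (rank % 2 == 0) ? rank + 1 : rank - 1;
    double sendbuf[COUNT], recvbuf[COUNT];
    for (int i = 0; i < COUNT; ++i) {
        sendbuf[i] = rank;   /* fill with something recognizable */
    }

    if (peer < size) {
        /* Deadlock-prone version: if both ranks call MPI_Send first, each
         * standard-mode send may block until the other side posts a receive.
         *
         *   MPI_Send(sendbuf, COUNT, MPI_DOUBLE, peer, 0, MPI_COMM_WORLD);
         *   MPI_Recv(recvbuf, COUNT, MPI_DOUBLE, peer, 0, MPI_COMM_WORLD,
         *            MPI_STATUS_IGNORE);
         *
         * MPI_Sendrecv expresses both halves in one call, so the library can
         * progress them together without the application juggling requests. */
        MPI_Sendrecv(sendbuf, COUNT, MPI_DOUBLE, peer, 0,
                     recvbuf, COUNT, MPI_DOUBLE, peer, 0,
                     MPI_COMM_WORLD, MPI_STATUS_IGNORE);
    }

    MPI_Finalize();
    return 0;
}
```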

Benchmarking: the good, the bad, and the ugly

August 10, 2009 at 12:00 pm PST

Here’s a great quote that I ran across the other day from an article entitled “A short history of btrfs” on lwn.net by Valerie Aurora. Valerie was specifically talking about benchmarking filesystems, but you could replace the words “file systems” with just about any technology:

When it comes to file systems, it’s hard to tell truth from rumor from vile slander: the code is so complex, the personalities are so exaggerated, and the users are so angry when they lose their data. You can’t even settle things with a battle of the benchmarks: file system workloads vary so wildly that you can make a plausible argument for why any benchmark is either totally irrelevant or crucially important.

This remark is definitely true in high performance computing realms, too. Let me use it to give a little insight into MPI implementer behavior, with a specific case study from the Open MPI project. Read More »

MPI-2.2 is darn near done

August 3, 2009 at 12:00 pm PST

Torsten beat me to the punch last week (and insideHPC commented on it), but I’m still going to write my $0.02 about the MPI-2.2 spec anyway.

At last week’s MPI Forum meeting in Chicago (hosted at the beautiful Microsoft facility — gotta love those fruit+granola yogurt parfaits they serve!), we had the last round of 2nd votes on the MPI-2.2 specification. All changes and updates to MPI-2.1 are therefore closed. Woo hoo!

All that remains is for us to actually integrate all the text that was voted on into a single, cohesive document, and then have a round of final votes at the next Forum meeting in Helsinki, Finland. These last votes in Helsinki are at least somewhat of a formality, but they do ensure that we don’t make editing mistakes in the process of transcribing all the proposals that passed into what will become the official MPI-2.2 standard document.

A few MPI-2.2 proposals didn’t get resolved in time to make it into the final MPI-2.2 document (and we found at least one or two errors in the proposals that did pass into MPI-2.2), so we’ll be issuing a short MPI-2.2 errata document shortly after MPI-2.2 is published. Read More »