Cisco Blogs

Cisco Blog > High Performance Computing Networking

hwloc hits 1.0rc1

Woo hoo!  The portable hardware locality project (hwloc) has finally hit release candidate status.  Much has changed since the v0.9 series, all of it for the better.  There’s an impressive array of features and other goodness contained in the upcoming v1.0 release (if I do say so myself — although the INRIA guys did most of the heavy lifting).  Check out the release announcement, or read below the jump for an abbreviated list of the new stuff.

I don’t normally make hoopla over release candidates, but we’d actually like to get people to give this stuff a whirl before it hits v1.0 so that we can iron out any kinks.

And if you’re wondering why a high-performance networking blog cares about a server-side software project that appears to have nothing to do with networking, read some of my prior posts.  Short version: this stuff already somewhat matters for networking performance.  It’s going to matter (much) more as time goes on.

“Free MPI downloads!”

Every once in a while, I do some kind of Google search for “MPI” (I know, hard to believe).

It amuses me how many “Free MPI download!” kinds of links show up.  All the open source MPI implementations are usually listed — Open MPI, MPICH and MPICH2, MVAPICH, etc.  These links are usually on “Software tracker” sites that purport to categorize and archive lots of free software in a convenient location from which users can download.

These links amuse me for (at least) three reasons.

Multi / many / mucho cores

I’ve briefly mentioned before the idea of dedicating some cores for MPI communication tasks (remember: the idea of using dedicated communication co-processors isn’t new).  I thought I’d explore this in a bit more detail in today’s entry.

Two networking vendors (I can’t say the vendor names or networking technologies here because they’re competitors, but let’s just say that the technology rhymes with “schminfiniband”) recently announced products that utilize communication processing offload for MPI collective communications.  Interestingly enough, they use different approaches.  Let’s look at both.

Open Source MPI Implementations

People periodically ask about my opinions of closed source forking from the open source project that I work on (Open MPI).  “Doesn’t it bother you that others are making money off the software you wrote?” they ask.  “Aren’t they taking credit that belongs to you?”  And my personal favorite: “Don’t you worry about losing control of the Open MPI project?”

My answers to these particular questions are:

  • No.  And to be clear, I’m part of a community that wrote the software — I didn’t write (anywhere close to) all of it.
  • No, they’re not.  They’re exercising the license that we chose to use (BSD).
  • No.  There are good reasons to extend — and even to fork from — our code base.

To be clear: I think that all the work — both open and closed source — surrounding the project and community that I am fortunate enough to be a part of is GREAT.

MPI User Survey: Fun Results

Here are some fun results that we gleaned from the MPI user community survey…

Respondents were asked how much they valued each of the following in MPI on a scale from 1=most important to 5=least important (each item could be rated individually):

  • Runtime performance (e.g., latency, bandwidth, resource consumption, etc.)
  • Feature-rich API
  • Run-time reliability
  • Scalability to large numbers of MPI processes
  • Integration with other middleware, communication protocols, etc.

The first item in the list — runtime performance — may seem silly.  After all, this is high performance computing.  Many on the Forum assumed that everyone would rank runtime performance as the most important thing.  They were wrong (!).
