Cisco Blogs

Cisco Blog > High Performance Computing Networking

More traffic

Traffic.  I find myself still thinking about my last entry today as I’m riding the blue line CTA from O’Hare airport to downtown Chicago for the MPI Forum meeting this afternoon.  Here I am, being spirited downtown at a steady clip on a commuter train while I see thousands of gridlocked cars on one side of me, and easily flowing motor vehicles on the other.  I will definitely reach downtown before the majority of vehicles that are only a few feet away from me on the Kennedy expressway, despite the fact that I’m quite sure that I left O’Hare long after they did.

Traffic is such a great network metaphor that it gives insight into today’s ramble: it’s well understood that network packets may be delivered in a different order than the one in which they were sent.  What’s less understood is why.
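The receiver-side mechanics are easy to sketch, even if the causes of reordering are not: if the sender numbers its packets, the receiver can buffer early arrivals and deliver everything in order.  A minimal Python sketch (the function name and the `(seq, payload)` packet format are mine, for illustration, not any real protocol's wire format):

```python
import heapq

def reassemble(packets):
    """Deliver payloads in sequence order, buffering any that arrive early.

    `packets` is an iterable of (seq, payload) tuples that may arrive
    out of order -- e.g., because they took different paths through the
    network fabric.
    """
    next_seq = 0
    pending = []   # min-heap of (seq, payload) held until their turn
    delivered = []
    for seq, payload in packets:
        heapq.heappush(pending, (seq, payload))
        # Drain every buffered packet whose turn has come.
        while pending and pending[0][0] == next_seq:
            delivered.append(heapq.heappop(pending)[1])
            next_seq += 1
    return delivered

# Packets 1 and 2 took a faster path than packet 0:
print(reassemble([(1, "b"), (2, "c"), (0, "a")]))  # → ['a', 'b', 'c']
```

Real transports (TCP, or an MPI implementation's matching logic) do essentially this, plus timeouts and retransmission; the interesting question, as above, is why the fabric reordered the packets in the first place.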




Traffic.  It’s a funny thing.  On my daily drive to work, I see (what appear to be) oddities and contradictions frequently.  For example, although the lanes on my side of the highway are running fast and clear, the other side is all jammed up.  But a half mile later, the other side is running fast and clear, and my lanes have been reduced to half-speed.  A short distance further, I’m zipping along again at 55mph (ahem).

Sometimes the reasons behind traffic congestion are obvious.  For example, when you drive through a busy interchange, it’s easy to understand how lots of vehicles entering and exiting the roadway can force you to slow down.  But sometimes the traffic flow issues are quite subtle; congestion may be caused by a non-obvious confluence of second- and third-order effects.

The parallels from highway traffic to networking are quite obvious, but the analogy can go much deeper when you consider that modern computational clusters span multiple different networks — we’re entering an era of Non-Uniform Network Architectures (NUNAs).



hwloc hits 1.0rc1

Woo hoo!  The portable hardware locality project (hwloc) has finally hit release candidate status.  Much has changed since the v0.9 series, all of it for the better.  There’s an impressive array of features and other goodness contained in the upcoming v1.0 release (if I do say so myself — although the INRIA guys did most of the heavy lifting).  Check out the release announcement, or read below the jump for an abbreviated list of the new stuff.

I don’t normally make hoopla over release candidates, but we’d actually like to get people to give this stuff a whirl before it hits v1.0 so that we can iron out any kinks.

And if you’re wondering why a high-performance networking blog cares about a server-side software project that appears to have nothing to do with networking, read some of my prior posts.  Short version: this stuff already somewhat matters for networking performance.  It’s going to matter (much) more as time goes on.



“Free MPI downloads!”

Every once in a while, I do some kind of Google search for “MPI” (I know, hard to believe).

It amuses me how many “Free MPI download!” kinds of links show up.  All the open source MPI implementations are usually listed — Open MPI, MPICH and MPICH2, MVAPICH, etc.  These links are usually on “software tracker” sites that purport to categorize and archive lots of free software in one convenient place for users to download.

These links amuse me for (at least) three reasons.


Multi / many / mucho cores

I’ve briefly mentioned before the idea of dedicating some cores for MPI communication tasks (remember: the idea of using dedicated communication co-processors isn’t new).  I thought I’d explore this in a bit more detail in today’s entry.

Two networking vendors (I can’t say the vendor names or networking technologies here because they’re competitors, but let’s just say that the technology rhymes with “schminfiniband”) recently announced products that utilize communication processing offload for MPI collective communications.  Interestingly enough, they use different approaches.  Let’s look at both.
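The general shape of the idea — regardless of which vendor's hardware does it — is that the application's compute path posts an operation and keeps going, while a dedicated resource makes progress on it asynchronously.  Here's a deliberately tiny Python sketch of that division of labor, using a thread as a stand-in for a dedicated communication core; the queue protocol and the toy "reduction" are mine for illustration, not either vendor's actual design:

```python
import threading
import queue

def progress_engine(requests, results):
    """Stand-in for a dedicated 'communication core': drains queued
    operations so the compute thread never stops to progress them."""
    while True:
        op = requests.get()
        if op is None:            # shutdown sentinel
            break
        tag, data = op
        results[tag] = sum(data)  # toy stand-in for a reduction

requests = queue.Queue()
results = {}
engine = threading.Thread(target=progress_engine, args=(requests, results))
engine.start()

# Compute thread posts a nonblocking "collective" and keeps computing...
requests.put(("allreduce-1", [1, 2, 3, 4]))

# ...and only later waits for completion.
requests.put(None)
engine.join()
print(results["allreduce-1"])  # → 10
```

The design win is overlap: the compute path pays only the cost of enqueueing, and the collective completes in the background — which is exactly what makes offloading collectives to dedicated processing attractive.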
