

High Performance Computing Networking

If ever I doubted that MPI was good for the world, I think that all I would need to do is remind myself of this commit that I made to the Open MPI source code repository today.  It was a single-character change — changing a 0 to a 1.  But the commit log message was Tolstoyan in length:

Go ahead — read the commit message.  I double-dog dare you.

That tome of a commit message both represents several months of on-and-off work on a single bug, and details the hard-won knowledge that was required to understand why changing a 0 to a 1 fixed a bug.

Ouch.

Long commit message notwithstanding, it does bring forth a good point: writing robust, portable network code is hard.  It's not a job for the meek.  More importantly, nor should it be the job of those who simply need to use the network with a minimum of fuss and muss.

To be clear: chemical engineers don’t care a whit whether the underlying network is shared memory, one of several flavors of Ethernet, Myrinet, InfiniBand, …or one of a dozen other network types.  They also don’t care at all what the IPV6_V6ONLY flag does on different operating systems.  They just want to send their data between processing entities and concentrate on the computational problem that they’re trying to solve.
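For the curious: IPV6_V6ONLY is exactly the kind of quirk middleware should absorb.  It controls whether an IPv6 listening socket also accepts IPv4-mapped connections, and its *default* differs across operating systems (Linux typically defaults to 0, while Windows and some BSDs default to 1), so portable code has to set it explicitly rather than trust the platform.  A minimal sketch in Python, assuming a Linux-like platform that exposes the option:

```python
import socket

# IPV6_V6ONLY=1 means "IPv6 only": the socket will NOT accept
# IPv4-mapped (::ffff:x.y.z.w) connections.  Because the OS default
# varies, portable code sets it explicitly on every AF_INET6 socket.
s = socket.socket(socket.AF_INET6, socket.SOCK_STREAM)
s.setsockopt(socket.IPPROTO_IPV6, socket.IPV6_V6ONLY, 1)

# Read the value back to confirm what the kernel will actually do.
value = s.getsockopt(socket.IPPROTO_IPV6, socket.IPV6_V6ONLY)
s.close()
```

This is the sort of per-OS bookkeeping that a chemical engineer should never have to see.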

Good, well-designed, and well-implemented middleware can hide most networking complexity from application developers.  Application developers should be free to concentrate on their application — not the itty bitty details of underlying network quirks.

I’m not saying that middleware should be a silver bullet for any kind of network access patterns; indeed, the best performing applications are designed and written with network traversals in mind (more specifically: if you don’t design and write your application with distributed data locality in mind, you’re in for a world of performance hurt).  What I’m saying is that when we can hide details, we should.

MPI does a pretty darn good job of this.  Compare the number of lines of code in a trivial ping-pong MPI application to a comparable application over sockets — or any network type, for that matter.  The MPI code wins in simplicity, hands down.  Sure, you can write incredibly complex applications with MPI.  But try writing that same application in sockets — or shared memory — or sockets and shared memory.  Yow.
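To make the comparison concrete, here is a sketch of just the *sockets* side of a trivial ping-pong, in Python over TCP loopback (the port number and helper are mine, purely for illustration).  Note how much ceremony goes into binding, listening, connecting, and looping on recv() — the MPI equivalent is essentially MPI_Init, MPI_Send, MPI_Recv, MPI_Finalize:

```python
import socket
import threading

def recv_exact(sock, n):
    # TCP is a byte stream: a single recv() may return fewer bytes
    # than asked for, so even a toy ping-pong needs a read loop.
    data = b""
    while len(data) < n:
        chunk = sock.recv(n - len(data))
        if not chunk:
            raise ConnectionError("peer closed connection")
        data += chunk
    return data

def server(listener, received):
    conn, _ = listener.accept()
    with conn:
        received.append(recv_exact(conn, 4))  # wait for "ping"
        conn.sendall(b"pong")                 # reply

# Server setup: socket, bind, listen -- all before any data moves.
listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
listener.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
listener.bind(("127.0.0.1", 0))   # port 0: let the OS pick one
listener.listen(1)
port = listener.getsockname()[1]

received = []
t = threading.Thread(target=server, args=(listener, received))
t.start()

# Client side: connect, send the ping, read back the pong.
client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
client.connect(("127.0.0.1", port))
client.sendall(b"ping")
reply = recv_exact(client, 4)
client.close()
t.join()
listener.close()
```

And this version already ignores error handling, timeouts, and byte ordering for real payloads — all things MPI handles for you.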

MPI is good.  MPI works.
