
MPI spice

July 20, 2010
at 12:00 pm PST

Hello, programmers.  Look at your code.  Now look at MPI.  Now back at your code.  Now back to MPI.

Sadly, your code isn’t parallel.  But if it stopped using global variables, it could act like MPI. 

Look down.  Back up.  Where are you?  On a 64-node, 32-core parallel computation cluster, with the code your code could act like. 

What’s in your hand?  Back at me.  I have it.  It’s an iPhone with an app for that thing you love. 

Look again.  The app is now a fully-parallelized, highly-scalable MPI code.

Anything is possible when your code acts like MPI and not like COBOL.

I’m on a horse.

…ok, I’m not on a horse.

But seriously, what’s holding you back?  It’s 2010, and workstations and servers are getting more and more parallel.  Let your app take the plunge — go parallel.

The Message Passing Interface (MPI) is one of the technologies that enables parallel computing.  MPI is the specification of an API that provides discrete, typed message passing between processes.  For example, if one process sends 128 integers, the target process receives 128 integers (vs. a stream of (128*sizeof(int)) bytes). 
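
To make that concrete, here's a minimal two-process sketch of that exact exchange.  The filename, the message tag, and the choice of ranks 0 and 1 are just illustrative; the calls themselves (MPI_Send, MPI_Recv, MPI_INT) are standard MPI.

    /* typed_send.c -- send 128 typed integers from rank 0 to rank 1.
       Compile: mpicc typed_send.c -o typed_send
       Run:     mpirun -np 2 ./typed_send                            */
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char *argv[])
    {
        int rank, buf[128];

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        if (0 == rank) {
            for (int i = 0; i < 128; ++i) buf[i] = i;
            /* Send 128 values of type MPI_INT -- not a raw byte stream */
            MPI_Send(buf, 128, MPI_INT, 1, 0, MPI_COMM_WORLD);
        } else if (1 == rank) {
            /* Receive exactly 128 integers from rank 0 */
            MPI_Recv(buf, 128, MPI_INT, 0, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
            printf("Rank 1 got %d ... %d\n", buf[0], buf[127]);
        }

        MPI_Finalize();
        return 0;
    }

Note that both sides talk in terms of counts and datatypes (128 of MPI_INT); the MPI implementation deals with the underlying bytes for you.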

If you’re just starting with MPI, there are two great tutorials at the US National Center for Supercomputing Applications (free registration required):

  • Introduction to MPI
  • Intermediate MPI

They don’t allow deep linking to the course descriptions, but you can see them on the course listing page.

You’ll also need an installation of software that implements the MPI specification in order to work through the tutorials.  Open MPI is a free (as in beer) implementation that is included in many operating systems (or you can download the latest version from its web site).
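
Once you have an implementation installed, the classic first program just has each process report its rank and the total number of processes.  Here's a sketch; the compile and launch commands in the comment are Open MPI's wrapper compiler (mpicc) and launcher (mpirun), so adjust them if you use a different implementation.

    /* hello.c -- "Hello, world" in MPI.
       Compile: mpicc hello.c -o hello
       Run:     mpirun -np 4 ./hello     */
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char *argv[])
    {
        int rank, size;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);  /* which process am I?   */
        MPI_Comm_size(MPI_COMM_WORLD, &size);  /* how many are running? */

        printf("Hello from rank %d of %d\n", rank, size);

        MPI_Finalize();
        return 0;
    }

If that prints four different ranks, congratulations: you're running in parallel.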

Make today an MPI day!
