Cisco Blogs

Cisco Blog > High Performance Computing Networking

Why do different MPIs perform differently?

Sometimes my wife wonders why I have a job.  She asks me: “Aren’t you just moving bytes from point A to point B?  How hard is that?”

In some ways, she’s right — it’s not hard. Any computer user performs the act of moving bytes from point A to point B oodles of times a day.  Email, for example, is message passing — the heart of email is moving bytes from point A to point B.

But like most real-world engineering issues, it’s not quite that simple.  Indeed, if you talk to most email server administrators, they will readily launch into highly complex discussions of how delivering an email from point A to point B is an incredibly intricate, complicated process.


Hot Interconnects evening panel

The program has finally been published: I’m looking forward to being on the evening panel at the 18th Hot Interconnects conference in August.  The one-line topic for the panel is:

Stuck with Sockets: Why is the network programming interface still from the 1980s?

I’m told that it’s a good panel — a fun panel.  A panel that is deeply technical, highly opinionated, and fairly provocative.  At least, I’m told that’s how it’s been at the last few HOTI conferences.


MPI spice

Hello, programmers.  Look at your code.  Now look at MPI.  Now back at your code.  Now back to MPI.

Sadly, your code isn’t parallel.  But if it stopped using global variables, it could act like MPI. 

Look down.  Back up.  Where are you?  On a 64-node, 32-core parallel computation cluster, with the code your code could act like. 

What’s in your hand?  Back at me.  I have it.  It’s an iPhone with an app for that thing you love. 

Look again.  The app is now a fully-parallelized, highly-scalable MPI code.

Anything is possible when your code acts like MPI and not like COBOL.

I’m on a horse.


Probability of correctness

Pop quiz, hotshot: what happens if you run this program with 32 processes on your favorite parallel resource?  (copy-n-pasting this code to compile and run it yourself is CHEATING!)

  int buf, rank = MPI::COMM_WORLD.Get_rank();
  if (0 == rank) {
    for (int i = 1; i < MPI::COMM_WORLD.Get_size(); ++i) {
      MPI_Status status;
      MPI_Recv(&buf, 1, MPI_INT, MPI_ANY_SOURCE, 123,
               MPI_COMM_WORLD, &status);
      buf = i * 2;
      MPI_Send(&buf, 1, MPI_INT, status.MPI_SOURCE, 123,
               MPI_COMM_WORLD);
    }
  } else {
    MPI_Send(&rank, 1, MPI_INT, 0, 123, MPI_COMM_WORLD);
    MPI_Recv(&buf, 1, MPI::INT, MPI_ANY_SOURCE, 123,
             MPI_COMM_WORLD, MPI_STATUS_IGNORE);
  }

The mix of C and C++ is for brevity here on the blog.  But yes: it does compile, and it is a valid MPI application.

Got your answer?


Seen in the blogosphere (part deux)

In my prior blog entry, I mentioned two blog articles of interest that I had read recently, and then proceeded to comment on one of them.  I’ll finally finish up and comment on the other one: Blowing the Doors of HPC Speed-up Numbers, written by Head Monkey Douglas Eadline (disclaimer: I used to write the MPI Mechanic column and Doug would edit it before it hit the shelves in the now-defunct Cluster World magazine).
