
int MPI_Vacation(short duration);

August 13, 2010 at 12:00 pm PST

The little-known “vacation” MPI function allows one to suspend the calling MPI process for brief periods of time.  The return value is an array of events that were missed while the process was inactive.  Note that the “duration” parameter is constrained to be a short value.

Programmers should also note that the returned array size tends to be large (typically regardless of the value of the duration parameter).  Care should be taken to ensure that enough resources are dedicated to processing the pending events while also responding to new events in a timely manner.
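Purely for illustration, a call might look like the sketch below.  MPI_Vacation exists only in this post, so the stub is invented just to make the fragment compile; its int return stands in for the size of the missed-events array.

  #include <stdio.h>

  /* Invented stub for the post's fictional function -- nothing here
     exists in any real MPI implementation. */
  static int MPI_Vacation(short duration) {
    (void) duration;   /* the backlog is large regardless of duration */
    return 32767;      /* a SHRT_MAX's worth of missed events */
  }

  int main(void) {
    short duration = 10;   /* must fit in a short: no long vacations */
    int missed = MPI_Vacation(duration);
    printf("%d events missed while away\n", missed);
    return 0;
  }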

Read More »

Why do different MPIs perform differently?

July 31, 2010 at 12:00 pm PST

Sometimes my wife wonders why I have a job.  She asks me: “Aren’t you just moving bytes from point A to point B?  How hard is that?”

In some ways, she’s right: it’s not hard.  Any computer user performs the act of moving bytes from point A to point B oodles of times a day.  Email, for example, is message passing; the heart of email is moving bytes from point A to point B.
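In MPI terms, the simplest version of that job looks something like this minimal sketch, with ranks 0 and 1 standing in for points A and B (error checking omitted; run it with at least two processes):

  #include <mpi.h>

  /* Move a few bytes from point A (rank 0) to point B (rank 1). */
  int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);
    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    char buf[6] = "hello";
    if (0 == rank) {
      MPI_Send(buf, 6, MPI_CHAR, 1, 0, MPI_COMM_WORLD);   /* point A */
    } else if (1 == rank) {
      MPI_Recv(buf, 6, MPI_CHAR, 0, 0, MPI_COMM_WORLD,
               MPI_STATUS_IGNORE);                        /* point B */
    }
    MPI_Finalize();
    return 0;
  }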

But like most real-world engineering issues, it’s not quite that simple.  Indeed, if you talk to most email server administrators, they will readily launch into highly complex discussions of how delivering an email from point A to point B is an incredibly intricate, complicated process.

Read More »

Hot Interconnects evening panel

July 23, 2010 at 12:00 pm PST

The program has finally been published: I’m looking forward to being on the evening panel at the 18th Hot Interconnects conference in August.  The one-line topic for the panel is:

Stuck with Sockets: Why is the network programming interface still from the 1980s?

I’m told that it’s a good panel: a fun panel.  A panel that is deeply technical, highly opinionated, and fairly provocative.  At least, I’m told that’s how it’s been at the last few HOTI conferences.

Read More »

MPI spice

July 20, 2010 at 12:00 pm PST

Hello, programmers.  Look at your code.  Now look at MPI.  Now back at your code.  Now back to MPI.

Sadly, your code isn’t parallel.  But if it stopped using global variables, it could act like MPI. 

Look down.  Back up.  Where are you?  On a 64-node, 32-core parallel computation cluster, with the code your code could act like. 

What’s in your hand?  Back at me.  I have it.  It’s an iPhone with an app for that thing you love. 

Look again.  The app is now a fully-parallelized, highly-scalable MPI code.

Anything is possible when your code acts like MPI and not like COBOL.

I’m on a horse.

Read More »

Probability of correctness

July 12, 2010 at 12:00 pm PST

Pop quiz, hotshot: what happens if you run this program with 32 processes on your favorite parallel resource?  (copy-n-pasting this code to compile and run it yourself is CHEATING!)

  int buf, rank = MPI::COMM_WORLD.Get_rank();
  if (0 == rank) {
    for (int i = 1; i < MPI::COMM_WORLD.Get_size(); ++i) {
      MPI_Status status;
      MPI_Recv(&buf, 1, MPI_INT, MPI_ANY_SOURCE, 123,
               MPI_COMM_WORLD, &status);
      buf = i * 2;
      MPI_Send(&buf, 1, MPI_INT, status.MPI_SOURCE, 123,
               MPI_COMM_WORLD);
    }
  } else {
    MPI_Send(&rank, 1, MPI_INT, 0, 123, MPI_COMM_WORLD);
    MPI_Recv(&buf, 1, MPI::INT, MPI_ANY_SOURCE, 123,
             MPI_COMM_WORLD, MPI_STATUS_IGNORE);
  }

The mix of C and C++ bindings is for brevity here on the blog.  But yes: it does compile, and it is a valid MPI application.
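For reference, the C++-binding calls in the snippet map directly onto plain C MPI; this fragment shows the correspondence (it doesn’t change the quiz at all):

  /* Plain-C equivalents of the C++ bindings used above: */
  int rank, size;
  MPI_Comm_rank(MPI_COMM_WORLD, &rank);   /* MPI::COMM_WORLD.Get_rank() */
  MPI_Comm_size(MPI_COMM_WORLD, &size);   /* MPI::COMM_WORLD.Get_size() */
  /* ...and MPI::INT is simply the C++ binding's name for MPI_INT. */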

Got your answer?

Read More »