Cisco Blog > High Performance Computing Networking

Why MPI?

It’s the beginning of a new year, so let’s take a step back and talk about what MPI is and why it is a Good Thing.

I’m periodically asked what exactly MPI is.  Those asking come from many different backgrounds: network administrators, systems programmers, application programmers, web developers, server and network hardware designers, … the list goes on.  Most have heard about this “MPI” thing as part of “high performance computing” (HPC), and think that it’s some kind of parallel programming model.

Technically, it’s not.  MPI — or, more specifically, message passing — implies a class of parallel programming models.  But at its heart, MPI is about simplified inter-process communication (IPC).
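To make “simplified IPC” concrete, here’s a little sketch of mine (not from any MPI implementation) of the kind of raw IPC plumbing that MPI abstracts away: shipping a single message between two processes using POSIX pipe() and fork().  In MPI, this whole dance collapses into a matched MPI_Send / MPI_Recv pair, with the transport, buffering, and process startup handled for you.

```c
#include <stdio.h>
#include <string.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

/* Ship one message from a child process to its parent over a pipe.
 * Returns the number of bytes received into buf, or -1 on error.
 * MPI collapses all of this plumbing into MPI_Send() / MPI_Recv(). */
static ssize_t pipe_message(const char *msg, char *buf, size_t len) {
    int fd[2];
    if (pipe(fd) != 0) {
        return -1;
    }
    pid_t pid = fork();
    if (pid < 0) {
        return -1;
    }
    if (pid == 0) {                    /* child: the "sender" rank */
        close(fd[0]);
        write(fd[1], msg, strlen(msg) + 1);
        close(fd[1]);
        _exit(0);
    }
    close(fd[1]);                      /* parent: the "receiver" rank */
    ssize_t n = read(fd[0], buf, len);
    close(fd[0]);
    waitpid(pid, NULL, 0);
    return n;
}
```

And that’s the *easy* case: two processes on the same machine.  MPI gives you the same send / receive semantics between processes spread across thousands of nodes.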


Happy Holidays!

My blog always gets “slow” during late November and most of December.  The podcast suffers, too.

Here’s why…

void november_december(int year) {
    // Uses at least one week
    attend_sc();
    // Uses about another week
    thanksgiving_vacation();
    while (before_christmas()) {
        MPI_Irecv(email, 17, MPI_WORK, ..., &req[i++]);
        MPI_Isend(voicemail_reply, 1, MPI_WORK, ..., &req[i++]);
        MPI_Isend(email_reply, 2, MPI_WORK, ..., &req[i++]);
    }
    // Uses another 2 weeks
    christmas_new_years_holiday();
}


The Graph 500

Did you hear about the Graph 500 at SC’10?  You might not have.  It got some fanfare, but other press releases probably drowned it out.

Even though it’s a brand new “yet another list”, it’s worth discussing because it’s officially a Good Idea.  Here’s what Rich Murphy, Official Chief Graph 500 Cat Herder (ok, I might have made up that title), tells me about it:

Basically, what we’re trying to do is create a complementary measure to Linpack for data intensive problems.  A lot of us on the steering committee believe that these kinds of problems will dominate high performance computing over the next decade.  We’ve given some “business areas” as examples of these kinds of applications: cybersecurity, medical informatics, data enrichment, social networks, and symbolic networks.  These basically exist to support the assertion that this could be huge someday.

+1 on what he says.
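For reference (this detail is mine, not from Rich’s quote): the benchmark kernel behind the Graph 500 list is a breadth-first search over an enormous synthetic graph, scored in traversed edges per second (TEPS) rather than flops.  Here’s a toy serial sketch of that kernel:

```c
#define MAXN 16

/* Serial BFS over an adjacency matrix.  Fills dist[] with hop counts
 * from src (-1 = unreachable) and returns the number of edges scanned
 * -- the quantity that TEPS-style metrics are built on. */
static int bfs(int n, int adj[MAXN][MAXN], int src, int dist[]) {
    int queue[MAXN], head = 0, tail = 0, edges = 0;
    for (int i = 0; i < n; i++) {
        dist[i] = -1;
    }
    dist[src] = 0;
    queue[tail++] = src;
    while (head < tail) {
        int u = queue[head++];
        for (int v = 0; v < n; v++) {
            if (!adj[u][v]) {
                continue;
            }
            edges++;                   /* every scanned edge counts */
            if (dist[v] < 0) {         /* first visit: record distance */
                dist[v] = dist[u] + 1;
                queue[tail++] = v;
            }
        }
    }
    return edges;
}
```

The real benchmark is this same idea at massive scale — distributed memory, billions of vertices — which is exactly the regime where memory and network performance matter far more than raw flops.  Hence “complementary to Linpack.”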


Hardware Locality (hwloc) v1.1 released

I’m very pleased to announce that we just released Hardware Locality (hwloc) version 1.1.  Woo hoo!

There’s bunches of new stuff in hwloc 1.1:

  • A memory binding interface is the Big New Feature.  It’s available both in the C API and via command line options to tools such as hwloc-bind.
  • We improved lstopo’s logical vs. physical ID numbering.  Logical numbers are now all prefixed with “L#”; physical numbers are prefixed with “P#”.  That’s that, then.
  • “cpusets” are now “bitmaps”, and no longer have a maximum size; they’re dynamically allocated (especially useful for machines with huge core counts).
  • Arbitrary key=value caching is available on all objects.


Stanford High Performance Computing Conference

Earlier today, I gave a talk entitled “How to Succeed in MPI without really trying” (slides: PPTX, PDF) at the Stanford High Performance Computing Conference. The audience was mostly MPI / HPC users, but with a healthy showing of IT and HPC cluster administrators.

My talk was about trying to make MPI (and parallel computing in general) just a little easier.  I tried to point out some common MPI mistakes I’ve seen people make, for example.  I also opined about how — in many cases — it’s easier to design parallelism in from the start rather than trying to graft it into an existing application.
