
Building 3rd party Open MPI plugins

January 20, 2011 at 11:47 am PST

Over the past several years, multiple organizations have approached me asking how to develop their own plugins outside of the official Open MPI tree.  As a community, Open MPI hasn’t done a good job of providing a clear example of how to do this.

Today, I published three examples of compiling Open MPI plugins outside of the official source tree.  A Mercurial repository is freely clonable from my Bitbucket account:

(MOVED: See below)

This repository might get moved somewhere more official (e.g., inside Open MPI’s SVN), but for the moment, it’s an easily publishable location for sharing with the world.

(UPDATE: the code has been moved to the main Open MPI SVN repository; look under contrib/build-mca-comps-outside-of-tree in the trunk and release branches starting with v1.4)
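For readers who want to see the moving parts: an Open MPI plugin is ultimately just a shared library that the run time opens and queries for a well-known symbol.  The sketch below is not Open MPI’s actual MCA component interface (the struct, symbol names, and file names are invented for illustration); it only shows the generic dlopen()/dlsym() pattern that out-of-tree plugins build on.  The repository above contains the real, working examples.

/* plugin.c: a hypothetical plugin (NOT the real MCA interface).
   Build with:  cc -fPIC -shared -o example_plugin.so plugin.c */
#include <stdio.h>

/* In practice, this typedef would live in a header shared with the host. */
typedef struct {
    const char *name;
    int (*init)(void);
} example_plugin_t;

static int example_init(void) {
    printf("example plugin initialized\n");
    return 0;
}

/* The well-known symbol that the host looks up with dlsym(). */
example_plugin_t example_plugin = { "example", example_init };

/* host.c: open the plugin at run time.
   Build with:  cc -o host host.c -ldl */
#include <dlfcn.h>
#include <stdio.h>

int main(void) {
    void *handle = dlopen("./example_plugin.so", RTLD_NOW);
    if (NULL == handle) {
        fprintf(stderr, "dlopen failed: %s\n", dlerror());
        return 1;
    }
    example_plugin_t *plugin = dlsym(handle, "example_plugin");
    if (NULL != plugin) {
        plugin->init();
    }
    dlclose(handle);
    return 0;
}

Real components additionally install into Open MPI’s plugin directory and fill in the framework-specific component struct; that is exactly what the examples in the repository demonstrate.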


Why MPI?

January 7, 2011 at 3:15 pm PST

It’s the beginning of a new year, so let’s take a step back and talk about what MPI is and why it is a Good Thing.

I’m periodically asked what exactly MPI is.  The people asking come from many different backgrounds: network administrators, systems programmers, application programmers, web developers, server and network hardware designers, … the list goes on.  Most have typically heard about this “MPI” thing as part of “high performance computing” (HPC), and think that it’s some kind of parallel programming model.

Technically, it’s not.  MPI — or, more specifically, message passing — implies a class of parallel programming models.  But at its heart, MPI is about simplified inter-process communication (IPC).
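To make “simplified IPC” concrete, here’s a minimal example of my own (the program is invented for illustration, but the calls are the standard MPI C bindings): rank 0 sends one integer to rank 1, with no sockets, hostnames, or byte-ordering in sight.

/* hello_msg.c: send one integer from rank 0 to rank 1.
   Build and run:  mpicc hello_msg.c -o hello_msg
                   mpirun -np 2 ./hello_msg */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char *argv[]) {
    int rank, value = 0;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    if (0 == rank) {
        value = 42;
        MPI_Send(&value, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
    } else if (1 == rank) {
        MPI_Recv(&value, 1, MPI_INT, 0, 0, MPI_COMM_WORLD,
                 MPI_STATUS_IGNORE);
        printf("rank 1 received %d\n", value);
    }
    MPI_Finalize();
    return 0;
}

All of the connection management, addressing, and data movement happens inside the MPI library; the application only names a peer rank, a message tag, and a communicator.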


Happy Holidays!

December 25, 2010 at 7:21 am PST

My blog always gets “slow” during late November and most of December.  The podcast suffers, too.

Here’s why…

void november_december(int year) {
    // Uses at least one week
    attend_sc();
    // Uses about another week
    thanksgiving_vacation();
    while (before_christmas()) {
        MPI_Irecv(email, 17, MPI_WORK, ..., &req[i++]);
        MPI_Isend(voicemail_reply, 1, MPI_WORK, ..., &req[i++]);
        MPI_Isend(email_reply, 2, MPI_WORK, ..., &req[i++]);
    }
    // Uses another 2 weeks
    christmas_new_years_holiday();
}


Stanford High Performance Computing Conference

December 9, 2010 at 3:18 pm PST

Earlier today, I gave a talk entitled “How to Succeed in MPI without really trying” (slides: PPTX, PDF) at the Stanford High Performance Computing Conference. The audience was mostly MPI / HPC users, but with a healthy showing of IT and HPC cluster administrators.

My talk was about trying to make MPI (and parallel computing in general) just a little easier.  For example, I pointed out some common MPI mistakes that I’ve seen people make (one classic mistake is sketched below).  I also opined that, in many cases, it’s easier to design parallelism in from the start than to graft it into an existing application.
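Here’s the flavor of one such classic mistake (my illustration here, not a slide from the talk): modifying a buffer that still belongs to a pending nonblocking send.  MPI forbids touching the send buffer until the request completes.

/* Illustrative only: reusing a send buffer too early. */
#include <mpi.h>

void broken(int peer) {
    int msg = 1;
    MPI_Request req;
    MPI_Isend(&msg, 1, MPI_INT, peer, 0, MPI_COMM_WORLD, &req);
    msg = 2;  /* WRONG: the buffer still belongs to MPI here */
    MPI_Wait(&req, MPI_STATUS_IGNORE);
}

void fixed(int peer) {
    int msg = 1;
    MPI_Request req;
    MPI_Isend(&msg, 1, MPI_INT, peer, 0, MPI_COMM_WORLD, &req);
    MPI_Wait(&req, MPI_STATUS_IGNORE);  /* complete the send first */
    msg = 2;  /* now safe to reuse the buffer */
}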


X petaflops, where X>1

October 29, 2010 at 4:49 am PST

Lotsa news coming out in the ramp-up to SC.  Probably the biggest is the news about China being the proud owner of the 2.5-petaflop computing monster named “Tianhe-1A”.

Congratulations to all involved!  2.5 petaflops is an enormous achievement.

Just to put this in perspective, there are only three other (publicly disclosed) machines in the world right now that have reached a petaflop: the Oak Ridge US Department of Energy (DoE) “Jaguar” machine hit 1.7 petaflops, China’s “Nebulae” hit 1.3 petaflops, and the Los Alamos US DoE “Roadrunner” machine hit 1.0 petaflops.

While petaflop-and-beyond may stay firmly in the bleeding-edge research domain for quite some time, I’m sure we’ll see more machines of this class over the next few years.
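For a back-of-the-envelope sense of scale (my arithmetic, assuming a desktop CPU that sustains roughly 10 gigaflops): 2.5 petaflops is 2.5 × 10^15 floating-point operations per second, so that desktop would need about 250,000 seconds (nearly three days) to match one second of Tianhe-1A’s work.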
