Cisco Blogs

Cisco Blog > High Performance Computing Networking

Stanford High Performance Computing Conference

Earlier today, I gave a talk entitled “How to Succeed in MPI without really trying” (slides: PPTX, PDF) at the Stanford High Performance Computing Conference. The audience was mostly MPI / HPC users, but with a healthy showing of IT and HPC cluster administrators.

My talk was about trying to make MPI (and parallel computing in general) just a little easier.  I tried to point out some common MPI mistakes I’ve seen people make, for example.  I also opined about how — in many cases — it’s easier to design parallelism in from the start than to try to graft it into an existing application.
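As a small illustration of the “design it in from the start” point (a hypothetical sketch of my own, in Python rather than MPI, and not taken from the talk): if the core computation is written from day one as a pure function over independent chunks of data, parallelizing it later is almost mechanical — the same decomposition maps onto local processes here, or onto MPI ranks in a real HPC code.

```python
from multiprocessing import Pool

def partial_sum(chunk):
    # A pure function over an independent chunk of data: no shared
    # state, no ordering dependencies, so it parallelizes cleanly.
    return sum(x * x for x in chunk)

def parallel_sum_of_squares(n, nworkers=4):
    # Because the work was decomposed into independent chunks from
    # the start, mapping it onto a worker pool is trivial.  In an
    # MPI code, each rank would simply own one chunk instead.
    chunks = [range(i, n, nworkers) for i in range(nworkers)]
    with Pool(nworkers) as pool:
        return sum(pool.map(partial_sum, chunks))

if __name__ == "__main__":
    print(parallel_sum_of_squares(1000))  # prints 332833500
```

Grafting the same decomposition onto a code that was written with one big shared loop and mutable global state is where the pain comes in — which was the point.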



Post-SC Roundup

SC’10 has now ended; in addition to being quite the cardio / leg workout from all the walking, it was a great show.  My December calendar is stacked full of meetings set up as a direct result of discussions from SC’10.  w00t.

Brock and I did our annual post-SC wrapup podcast (ok, it’s only the 2nd time we’ve done it, but it’s an “emerging tradition”).  It’s the one time a year where Brock and I are physically in the same location to do the podcast.

I’ll be off for the US Thanksgiving holiday this week, so it’s highly unlikely that I’ll add anything here until sometime next week.  Happy Thanksgiving, everyone!


Pre-SC slushies

I’m sitting in an airport on a layover while en route to the Big Easy for #SC10 (i.e., the SuperComputing trade show, for those of you not in the know).  Love the free wifi, Charlotte airport — thanks!

Today’s post is a quickie / roundup of things right before the maelstrom of Supercomputing starts in force tomorrow night…


Collaborate to Innovate

Doug Eadline recently talked about how community is tremendously important to HPC.  Two words: he’s right.  The HPC ecosystem is all about working together to advance the state of the art.  No single group, university, or company could do it alone.

As Cisco’s representative to the MPI Forum and the Open MPI software project, I often work with teams of researchers and developers.  Sometimes all the people are in one physical place, and the process of sharing ideas and dividing work is easy.  But it’s much more common for me to participate in geographically scattered groups of people.  And there’s no doubt about it: collaboration across distances is just hard.  You can’t beat having a bunch of engineers in the same room with a whiteboard when trying to figure out a complex topic.  But we don’t always get that opportunity.

So how do you take a disparate group of people and make them productive?



X petaflops, where X>1

Lotsa news coming out in the ramp-up to SC.  Probably the biggest is the news that China is now the proud owner of the 2.5-petaflop computing monster named “Tianhe-1A”.

Congratulations to all involved!  2.5 petaflops is an enormous achievement.

Just to put this in perspective, there are only three other (publicly disclosed) machines in the world right now that have reached a petaflop: the Oak Ridge US Department of Energy (DoE) “Jaguar” machine hit 1.7 petaflops, China’s “Nebulae” hit 1.3 petaflops, and the Los Alamos US DoE “Roadrunner” machine hit 1.0 petaflops.

While petaflop-and-beyond may stay firmly in the bleeding-edge research domain for quite some time, I’m sure we’ll see more machines of this class over the next few years.
