Cisco Blog > High Performance Computing Networking

MPI for mobile devices (or not)

March 12, 2013 at 1:13 pm PST

Every once in a while, the idea pops up again: why not use all the world’s cell phones for parallel and/or distributed computations? There are gazillions of these phones — think of the computing power!

After all, an army of ants can defeat a war horse, right?

Well… yes… and no.

MPI Forum: What’s Next?

February 28, 2013 at 11:27 pm PST

Now that we’re just starting into the MPI-3.0 era, what’s next?

The MPI Forum is still having active meetings.  What is left to do?  Isn’t MPI “done”?

Nope.  MPI is an ever-changing standard to meet the needs of HPC.  And since HPC keeps changing, so does MPI.

Ain’t your father’s TCP

February 15, 2013 at 5:00 am PST

TCP?  Who cares about TCP in HPC?

More and more people, actually.  With the commoditization of HPC, lots of newbie HPC users are intimidated by specialized, one-off, traditional HPC network types and opt for the simplicity and universality of Ethernet.

And it turns out that TCP doesn’t suck nearly as much as most (HPC) people think, particularly on modern servers, Ethernet fabrics, and powerful Ethernet NICs.

Modern GPU Integration in MPI

February 8, 2013 at 5:00 am PST

Today’s guest post is from Rolf vandeVaart, a Senior CUDA Engineer with NVIDIA.

GPUs are becoming quite popular as accelerators in High Performance Computing clusters. For example, check out Titan, a recent entry on the Top500 list from Oak Ridge National Laboratory. Titan has 18,688 nodes (299,008 CPU cores) coupled with 18,688 NVIDIA Tesla K20 GPUs.

To help ease the programming burden working with GPU memory in MPI applications, support has been added to several MPI libraries such that the MPI library can directly send and receive the GPU buffers without the user having to stage them in host memory first. This has sometimes been referred to as “CUDA-aware MPI.”
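
To make that concrete, here’s a minimal sketch of what CUDA-aware MPI usage looks like, assuming an MPI library built with CUDA support (and with error checking omitted): the device pointer returned by cudaMalloc() goes straight into MPI_Send / MPI_Recv, with no cudaMemcpy() into a host staging buffer.

    /* Minimal CUDA-aware MPI sketch: rank 0 sends a GPU buffer directly to
       rank 1.  Assumes the MPI library was built with CUDA support; with a
       non-CUDA-aware MPI you would have to stage through host memory first. */
    #include <mpi.h>
    #include <cuda_runtime.h>

    int main(int argc, char **argv)
    {
        MPI_Init(&argc, &argv);

        int rank;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        const int count = 1 << 20;                    /* 1M floats */
        float *d_buf;
        cudaMalloc((void **)&d_buf, count * sizeof(float));

        if (rank == 0) {
            /* Pass the device pointer directly -- no host staging buffer. */
            MPI_Send(d_buf, count, MPI_FLOAT, 1, 0, MPI_COMM_WORLD);
        } else if (rank == 1) {
            /* Receive directly into GPU memory. */
            MPI_Recv(d_buf, count, MPI_FLOAT, 0, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
        }

        cudaFree(d_buf);
        MPI_Finalize();
        return 0;
    }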

Process and memory affinity: why do you care?

January 31, 2013 at 5:00 am PST

I’ve written about NUMA effects and process affinity on this blog lots of times in the past.  It’s a complex topic that has a lot of real-world effects on your MPI and HPC applications.  If you’re not using processor and memory affinity, you’re likely experiencing performance degradation without even realizing it.

In short:

  1. If you’re not booting your Linux kernel in NUMA mode, you should be.
  2. If you’re not using processor affinity with your MPI/HPC applications, you should be (see the sketch below).
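
As a concrete picture of what #2 means at the OS level, here’s a minimal sketch, assuming Linux/glibc, that pins each MPI process to a single core with sched_setaffinity(). The rank-to-core mapping is naive (it assumes a single node, or a node-aware rank layout), and in practice your MPI launcher’s binding options do this for you; the point is only to show what “processor affinity” is.

    /* Minimal affinity sketch: bind each MPI process to one core so the kernel
       stops migrating it, which also keeps its first-touch memory allocations
       on the local NUMA node. */
    #define _GNU_SOURCE        /* for cpu_set_t, CPU_ZERO/CPU_SET, sched_setaffinity() */
    #include <sched.h>
    #include <unistd.h>
    #include <stdio.h>
    #include <mpi.h>

    int main(int argc, char **argv)
    {
        MPI_Init(&argc, &argv);

        int rank;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        /* Naive mapping: core = rank modulo the number of online cores. */
        long ncores = sysconf(_SC_NPROCESSORS_ONLN);
        int core = (int)(rank % ncores);

        cpu_set_t set;
        CPU_ZERO(&set);
        CPU_SET(core, &set);

        /* Bind the calling process (pid 0 means "myself") to that core. */
        if (sched_setaffinity(0, sizeof(set), &set) != 0)
            perror("sched_setaffinity");
        else
            printf("Rank %d bound to core %d\n", rank, core);

        /* ... the rest of the application now runs with stable locality ... */

        MPI_Finalize();
        return 0;
    }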
