Cisco Blogs



Traffic (redux)

July 28, 2014 at 10:44 am PST

I’ve written about network traffic before (see this post and this post). It’s the subject of endless blog posts, help forums, and instructional guides across the internet.

In a High Performance Computing (HPC) context, there are some fascinating aspects of network traffic that are quite different from other kinds of network traffic.
Read More »


Process affinity: Hop on the bus, Gus!

January 10, 2014 at 5:00 am PST

Today’s blog post is written by Joshua Ladd, Open MPI developer and HPC Algorithms Engineer at Mellanox Technologies.

At some point in the process of pondering this blog post I noticed that my subconscious had, much to my annoyance, registered a snippet of the chorus to Paul Simon’s timeless classic “50 Ways to Leave Your Lover” with my brain’s internal progress thread. It was seemingly endlessly repeating, billions of times over (well, at least ten times over), the catchy hook that offers one of presumably 50 possible ways to leave one’s lover: “Hop on the bus, Gus.” Assuming Gus does indeed wish to extricate himself from a passionate predicament, this seems a reasonable suggestion. But supposing Gus has a really jilted lover, his response to Mr. Simon’s exhortation might be, “Just how many hops to that damn bus, Paul?”

Read More »


Open MPI: Binding to core by default

December 18, 2013 at 12:55 pm PST

After years of discussion, the upcoming release of Open MPI 1.7.4 will change how processes are laid out (“mapped”) and bound by default. Here are the specifics:

  • If the number of processes is <= 2, processes will be mapped by core
  • If the number of processes is > 2, processes will be mapped by socket
  • Processes will be bound to core
  • MPI_COMM_WORLD ranks will be assigned by slot

These are only the default values; the user can, of course, change them via mpirun CLI options, environment variables, etc.
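To make the effect of these defaults easy to see, here is a minimal sketch (mine, not from the post) of an MPI program that asks Linux for each process’s CPU affinity mask and prints the cores it is bound to. It assumes Linux (for sched_getaffinity) and an MPI implementation such as Open MPI 1.7.4; the program name binding_check is just illustrative.

    /* binding_check.c: a hypothetical example (not from this post).
     * Each MPI process reads its Linux CPU affinity mask and prints the
     * cores it is currently bound to, so you can observe the default
     * map/bind behavior described above.
     * Build:  mpicc binding_check.c -o binding_check
     * Run:    mpirun -np 4 ./binding_check
     */
    #define _GNU_SOURCE
    #include <sched.h>
    #include <stdio.h>
    #include <mpi.h>

    int main(int argc, char **argv)
    {
        int rank, len = 0;
        char cores[1024] = "";
        cpu_set_t mask;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        /* Query the set of cores this process is allowed to run on. */
        CPU_ZERO(&mask);
        if (sched_getaffinity(0, sizeof(mask), &mask) == 0) {
            int c;
            for (c = 0; c < CPU_SETSIZE && len < (int) sizeof(cores) - 8; ++c) {
                if (CPU_ISSET(c, &mask)) {
                    len += snprintf(cores + len, sizeof(cores) - len, "%d ", c);
                }
            }
            printf("MPI_COMM_WORLD rank %d is bound to core(s): %s\n", rank, cores);
        }

        MPI_Finalize();
        return 0;
    }

Compare the output for runs with 2 processes and with more than 2 processes to see the by-core vs. by-socket mapping. Open MPI’s mpirun also has a --report-bindings option that prints similar information at launch time, and the --map-by and --bind-to options (in the 1.7 series) override the defaults listed above.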

Read More »


EuroMPI’13 Cisco slides: Open MPI Process Affinity User Interface

September 18, 2013 at 5:17 am PST

The slides below are from my presentation at EuroMPI’13 about Open MPI’s flexible process affinity interface (in OMPI 1.7.2 and later).  I described this system in prior blog entries (one, two, three), but many people keep asking me about it.

Josh Hursey from U. Wisconsin, La Crosse, wrote this IMUDI paper about the interface (IMUDI is a sub-workshop of EuroMPI focusing on end-user issues) to raise a little more awareness of this process affinity system.  Specifically, we designed the affinity system so that we could get feedback from real end users about what is useful and what is not.

Read More »


How many network links do you have for MPI traffic?

July 19, 2013 at 5:00 am PST

If you’re a bargain-basement HPC user, you might well scoff at the idea of having more than one network interface for your MPI traffic.

“I’ve got (insert your favorite high bandwidth network name here)! That’s plenty to serve all my cores! Why would I need more than that?”

I can think of (at least) three reasons off the top of my head.

Read More »
