
Sockets, cores, and hyperthreads… oh my!

October 15, 2010 at 5:00 am PST

Core counts are going up.  Cisco’s C460 rack-mount server series, for example, can have up to 32 Nehalem EX cores.  As a direct result, we may well be returning to the era of running more than one MPI job per server.  This has long been true on “big iron” parallel resources, but commodity Linux HPC clusters have tended towards the one-MPI-job-per-server model in recent history.

Because of this trend, I have an open-ended question for MPI users and cluster administrators: how do you want to bind MPI processes to processors?  For example: what kinds of binding patterns do you want?  How many hyperthreads / cores / sockets do you want each process to bind to?  How do you want to specify what process binds where?  What level of granularity of control do you want / need?  (…and so on)
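
To make the question concrete, here is one minimal sketch of a possible answer, written in C against the hwloc C API: each MPI process binds itself to a single core, chosen round-robin by rank.  This is purely illustrative; the rank-modulo-core-count choice, the bind-to-a-whole-core (i.e., all of its hyperthreads) policy, and the omitted error handling are assumptions for the sake of the example, not a recommendation.

```c
/* Hedged sketch: bind each MPI process to one core, round-robin by rank.
 * The policy (rank % core count, bind to the whole core and all of its
 * hyperthreads) is just an example, not a recommendation. */
#include <stdio.h>
#include <mpi.h>
#include <hwloc.h>

int main(int argc, char *argv[])
{
    int rank;
    hwloc_topology_t topo;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    /* Discover this server's processor topology. */
    hwloc_topology_init(&topo);
    hwloc_topology_load(topo);

    /* Pick the (rank % ncores)-th core on this server... */
    int ncores = hwloc_get_nbobjs_by_type(topo, HWLOC_OBJ_CORE);
    hwloc_obj_t core = hwloc_get_obj_by_type(topo, HWLOC_OBJ_CORE,
                                             rank % ncores);

    /* ...and bind this process to that core's entire cpuset
       (i.e., all of its hyperthreads). */
    hwloc_cpuset_t set = hwloc_bitmap_dup(core->cpuset);
    hwloc_set_cpubind(topo, set, 0);

    char str[128];
    hwloc_bitmap_snprintf(str, sizeof(str), set);
    printf("MPI rank %d bound to cpuset %s\n", rank, str);

    hwloc_bitmap_free(set);
    hwloc_topology_destroy(topo);
    MPI_Finalize();
    return 0;
}
```

Whether a process should instead bind to a full socket, to a single hyperthread, or to nothing at all is exactly the open question above.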

We are finding that every user we ask seems to have slightly different answers.  What do you think?  Let me know in the comments below.


MPI concepts that didn’t make it

October 8, 2010 at 5:00 am PST

The following is an abbreviated list of my favorite concepts and/or specific functions that never made the cut into an official version of the MPI specification:
  • MPI_ESP(): The “do what I meant, not what my code says” function.  The function is intended as a hint to the MPI implementation that the executing code is likely incorrect, and the implementation should do whatever it feels that the programmer really intended it to do.
  • MPI_Encourage(): A watered-down version of MPI_Progress().
  • MPI_Alltoalltoall(): Every process sends to every other process, and then, just to be sure, everyone sends to everyone else again.  Good for benchmarks.  (See the sketch below.)
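
For the curious, here is a tongue-in-cheek sketch of how MPI_Alltoalltoall() might be “implemented”: simply two back-to-back MPI_Alltoall() calls.  To be clear, this function is not part of any MPI standard (that’s the joke); the signature below just mirrors MPI_Alltoall.

```c
/* Tongue-in-cheek sketch only: MPI_Alltoalltoall() is NOT a real MPI
 * function.  Everyone sends to everyone... and then, just to be sure,
 * everyone sends to everyone else again. */
#include <mpi.h>

int MPI_Alltoalltoall(void *sendbuf, int sendcount, MPI_Datatype sendtype,
                      void *recvbuf, int recvcount, MPI_Datatype recvtype,
                      MPI_Comm comm)
{
    int rc = MPI_Alltoall(sendbuf, sendcount, sendtype,
                          recvbuf, recvcount, recvtype, comm);
    if (MPI_SUCCESS != rc) {
        return rc;
    }
    /* Just to be sure. */
    return MPI_Alltoall(sendbuf, sendcount, sendtype,
                        recvbuf, recvcount, recvtype, comm);
}
```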


“Give me 4 255-sided die and I’ll get you some IPs”

September 29, 2010 at 12:00 pm PST

Have you ever wondered how an MPI implementation picks network paths and allocates resources?  It’s a pretty complicated (set of) issue(s), actually.

An MPI implementation must tread the fine line between performance and resource consumption.  If the implementation chooses poorly, it risks poor performance and/or the wrath of the user.  If the implementation chooses well, users won’t notice at all — they silently enjoy good performance.

It’s a thankless job, but someone’s got to do it.  :-)


Why MPI is Good for You

March 6, 2010 at 12:00 pm PST

If ever I doubted that MPI was good for the world, I think that all I would need to do is remind myself of this commit that I made to the Open MPI source code repository today.  It was a single-character change — changing a 0 to a 1.  But the commit log message was Tolstoyan in length:

  • 87 lines of text
  • 736 words
  • 4225 characters

Go ahead — read the commit message.  I double-dog dare you.

That tome of a commit message both represents several months of on-and-off work on a single bug and details the hard-won knowledge that was required to understand why changing a 0 to a 1 fixed the bug.

Ouch.


SGE debuts topology-aware scheduling

January 23, 2010 at 12:00 pm PST

I just ran across a great blog entry about SGE debuting topology-aware scheduling.  Dan Templeton does a great job of describing the need for processor topology-aware job scheduling within a server.  Many MPI jobs fit exactly within his description of applications that have “serious resource needs” — they typically require lots of CPU and/or network (or other I/O).  Hence, scheduling an MPI job intelligently not only across the network, but also across the resources inside each server, is pretty darn important.  It’s all about location, location, location!

Particularly as core counts in individual servers are going up.

Particularly as networks get more complicated inside individual servers. 

Particularly if heterogeneous computing inside a single server becomes popular.

Particularly as resources are now pretty much guaranteed to be non-uniform within an individual server.

These are exactly the reasons that, even though I’m a network middleware developer, I spend time with server-specific projects like hwloc — you really have to take a holistic approach in order to maximize performance.
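
As a small illustration of the kind of intra-server information hwloc exposes (and that a topology-aware scheduler or MPI library can act on), here is a short sketch that simply counts the NUMA nodes, sockets, cores, and hardware threads in a server.  It is written against the hwloc 1.x C API of this era; note that newer hwloc releases rename some of these object types.

```c
/* Sketch: ask hwloc how much of each kind of processor resource this
 * server has.  Object-type names follow the hwloc 1.x API; later hwloc
 * versions rename some of them (e.g., "socket" becomes "package"). */
#include <stdio.h>
#include <hwloc.h>

int main(void)
{
    hwloc_topology_t topo;

    hwloc_topology_init(&topo);
    hwloc_topology_load(topo);

    printf("NUMA nodes:       %d\n",
           hwloc_get_nbobjs_by_type(topo, HWLOC_OBJ_NODE));
    printf("sockets:          %d\n",
           hwloc_get_nbobjs_by_type(topo, HWLOC_OBJ_SOCKET));
    printf("cores:            %d\n",
           hwloc_get_nbobjs_by_type(topo, HWLOC_OBJ_CORE));
    printf("hardware threads: %d\n",
           hwloc_get_nbobjs_by_type(topo, HWLOC_OBJ_PU));

    hwloc_topology_destroy(topo);
    return 0;
}
```

A scheduler (or MPI library) that knows these counts, and how the pieces nest, can place a job's processes near each other and near the resources they actually use: location, location, location.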
