

Cisco Blog > High Performance Computing Networking

It’s the latency, stupid

May 28, 2010 at 12:00 pm PST

…except when it isn’t.

Most people throw around latency and bandwidth numbers as the most important metrics for a given MPI implementation.  “MPI implementation X is terrible because MPI implementation Y’s latency is 5% lower!”

Ahh… the fervor of youth (and marketing).  If only the world were so black and white.  But it’s not.  The world is grey.  I can think of 20 metrics and implementation features off the top of my head that matter to real-world users and applications.
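
(For concreteness: the “latency” being argued about is usually just half the round-trip time of a small-message ping-pong between two ranks.  A rough sketch of that measurement, using the MPI C bindings, might look like the following; real microbenchmarks such as OSU’s osu_latency or NetPIPE add warm-up iterations, message-size sweeps, and more careful statistics.)

    // Illustrative ping-pong latency sketch: run with exactly 2 ranks.
    // Ranks 0 and 1 bounce a 1-byte message; latency is reported as
    // half the average round-trip time.
    #include <mpi.h>
    #include <cstdio>

    int main(int argc, char **argv) {
        const int iters = 10000;
        char buf[1] = {0};
        int rank;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        MPI_Barrier(MPI_COMM_WORLD);
        double start = MPI_Wtime();
        for (int i = 0; i < iters; ++i) {
            if (rank == 0) {
                MPI_Send(buf, 1, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
                MPI_Recv(buf, 1, MPI_CHAR, 1, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            } else if (rank == 1) {
                MPI_Recv(buf, 1, MPI_CHAR, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
                MPI_Send(buf, 1, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
            }
        }
        double elapsed = MPI_Wtime() - start;

        if (rank == 0) {
            std::printf("half round-trip latency: %.2f us\n",
                        elapsed / iters / 2.0 * 1e6);
        }

        MPI_Finalize();
        return 0;
    }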


Do you use C++? (redux)

May 20, 2010 at 12:00 pm PST

Let’s revisit my stats from a prior blog post about who uses the MPI C++ bindings.  Jed Brown was kind enough to school me in how terrible my prior statistical analysis was.  

I’ve actually removed the offending stats from that entry and am re-doing them here, hopefully in a more meaningful way.  I won’t even describe how bad / wrong my prior analysis was; let’s just go through the numbers again with a little something I like to call The Right Way…


hwloc 1.0 released!

May 18, 2010 at 12:00 pm PST

At long last, we have released a stable, production-quality version of Hardware Locality (hwloc).  Yay!

In case you’ve missed all my prior discussions about it: hwloc provides command-line tools and a C API to obtain a hierarchical map of key computing elements, such as NUMA memory nodes, shared caches, processor sockets, processor cores, and processing units (logical processors, or “threads”).  hwloc also gathers various attributes, such as cache and memory information, and is portable across a variety of operating systems and platforms.

In an increasingly NUMA (and NUNA!) world, hwloc is a valuable tool for achieving high performance.
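
If you want a feel for the C API, here is a minimal sketch (using the hwloc 1.0 object names) that loads the topology of the current machine and counts a few of those key elements; the lstopo command-line tool prints the same map in human-readable or graphical form.

    // Minimal hwloc example: discover the machine topology and count
    // NUMA nodes, sockets, cores, and processing units (PUs).
    #include <hwloc.h>
    #include <cstdio>

    int main(void) {
        hwloc_topology_t topology;

        // Allocate, then build, the topology map of the current machine.
        hwloc_topology_init(&topology);
        hwloc_topology_load(topology);

        int nodes   = hwloc_get_nbobjs_by_type(topology, HWLOC_OBJ_NODE);
        int sockets = hwloc_get_nbobjs_by_type(topology, HWLOC_OBJ_SOCKET);
        int cores   = hwloc_get_nbobjs_by_type(topology, HWLOC_OBJ_CORE);
        int pus     = hwloc_get_nbobjs_by_type(topology, HWLOC_OBJ_PU);

        std::printf("%d NUMA node(s), %d socket(s), %d core(s), %d PU(s)\n",
                    nodes, sockets, cores, pus);

        hwloc_topology_destroy(topology);
        return 0;
    }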



Do you use the MPI C++ bindings?

May 15, 2010 at 12:00 pm PST

Do you use the MPI C++ bindings in real-world MPI applications?

I’m not talking about using the MPI C bindings in C++ MPI applications (e.g., using MPI_Send() — a C binding).  I’m talking about writing substantial C++ MPI applications that use the MPI C++ bindings (such as MPI::Send()). 
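
To make the distinction concrete, here is a small illustrative sketch showing both styles side by side; the first call in each branch is the C binding (perfectly legal in a C++ program), and the second is the MPI-2 C++ binding that the question is actually about.

    // Contrast: MPI C bindings vs. MPI C++ bindings, in one C++ program.
    // Run with exactly 2 ranks.
    #include <mpi.h>

    int main(int argc, char **argv) {
        MPI::Init(argc, argv);
        int rank = MPI::COMM_WORLD.Get_rank();
        int value = 42;

        if (rank == 0) {
            // C binding, called from C++ code:
            MPI_Send(&value, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
            // The same send via the C++ binding:
            MPI::COMM_WORLD.Send(&value, 1, MPI::INT, 1, 0);
        } else if (rank == 1) {
            MPI_Recv(&value, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            MPI::COMM_WORLD.Recv(&value, 1, MPI::INT, 0, 0);
        }

        MPI::Finalize();
        return 0;
    }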

Do you do that?  Post a comment below and let me know.

I ask because there is some confusion in the MPI Forum as to exactly how many people use the MPI C++ bindings — and whether we should un-deprecate them.


ummunotify hits the -mm kernel tree

May 11, 2010 at 12:00 pm PST

The “ummunotify” functionality was added to the “-mm” Linux kernel tree yesterday.

/me does a happy dance

Granted, getting into the -mm tree doesn’t guarantee anything about getting into Linus’ tree.  But it’s definitely a step in the right direction.

Let me tell you why this is a Big Deal: memory management for networks based on OS-bypass techniques is a nightmare.  Ummunotify makes it slightly less of a nightmare.  This is good for MPI implementations and good for real-world MPI applications.
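
For those who haven’t lived this particular nightmare: MPI implementations typically keep a cache of memory registrations keyed by virtual address, and without kernel help that cache can silently go stale.  Here is a purely conceptual sketch of the problem; the names (Registration, register_with_nic) are hypothetical stand-ins, not a real verbs or ummunotify API.

    // Conceptual sketch only: why registration caches are dangerous
    // without kernel notifications.  Nothing here talks to real hardware.
    #include <cstddef>
    #include <cstdint>
    #include <map>

    struct Registration {
        uint64_t nic_handle;  // what the NIC uses to address the pinned pages
        size_t   len;
    };

    // Hypothetical stand-in for the real (expensive) pin-and-register call.
    static Registration register_with_nic(void *buf, size_t len) {
        Registration r;
        r.nic_handle = reinterpret_cast<uint64_t>(buf);
        r.len = len;
        return r;
    }

    // Registration cache keyed by virtual address, as many MPIs keep.
    static std::map<void *, Registration> reg_cache;

    Registration lookup_or_register(void *buf, size_t len) {
        std::map<void *, Registration>::iterator it = reg_cache.find(buf);
        if (it != reg_cache.end() && it->second.len >= len) {
            // DANGER: if the application free()d this buffer, the allocator
            // returned the pages to the OS, and a later malloc() handed back
            // the same virtual address, this cached entry now refers to the
            // wrong physical pages.  Without kernel help (e.g., ummunotify
            // events), user space cannot reliably detect that the mapping
            // underneath this address has changed.
            return it->second;
        }
        Registration r = register_with_nic(buf, len);
        reg_cache[buf] = r;
        return r;
    }

    int main() { return 0; }  // nothing to run; this is just an illustration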
