
No RCE-cast this week

Sorry folks — Brock and I got caught up in our day jobs recently, and didn’t get to put out an RCE podcast this week.

We have some more interviews on tap, so stay tuned.  We’ll return to our regularly scheduled every-two-weeks publication in two weeks.

It’s the latency, stupid

…except when it isn’t.

Most people throw around latency and bandwidth numbers as the most important metrics for a given MPI implementation.  “MPI implementation X is terrible because MPI implementation Y’s latency is 5% lower!”

Ahh… the fervor of youth (and marketing).  If only the world were so black and white.  But it’s not.  The world is grey.  I can think of 20 metrics and implementation features off the top of my head that matter to real-world users and applications.

Do you use C++? (redux)

Let’s revisit my stats from a prior blog post about who uses the MPI C++ bindings.  Jed Brown was kind enough to school me in how terrible my prior statistical analysis was.  

I’ve actually removed the offending stats from that entry and am re-doing them here, hopefully in a more meaningful way.  I won’t even describe how bad/wrong my prior analysis was; let’s just go through the numbers again with a little something I like to call The Right Way…

hwloc 1.0 released!

At long last, we have released a stable, production-quality version of Hardware Locality (hwloc).  Yay!

If you’ve missed all my prior discussions about it, hwloc provides command-line tools and a C API to obtain the hierarchical map of key computing elements, such as NUMA memory nodes, shared caches, processor sockets, processor cores, and processing units (logical processors, or “threads”). hwloc also gathers various attributes, such as cache and memory information, and is portable across a variety of operating systems and platforms.
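
For a concrete feel of the C API, here’s a minimal sketch (mine, not from the hwloc documentation; the file name and compile line are assumptions) that loads the topology and counts a couple of the object types hwloc discovers.  Something like “cc count_cores.c -lhwloc” should build it:

    #include <stdio.h>
    #include <hwloc.h>

    int main(void)
    {
        hwloc_topology_t topology;

        /* Discover the hierarchical map of the machine we're running on */
        hwloc_topology_init(&topology);
        hwloc_topology_load(topology);

        /* Count a couple of the object types that hwloc knows about */
        int cores = hwloc_get_nbobjs_by_type(topology, HWLOC_OBJ_CORE);
        int pus   = hwloc_get_nbobjs_by_type(topology, HWLOC_OBJ_PU);
        printf("Found %d cores and %d processing units (PUs)\n", cores, pus);

        hwloc_topology_destroy(topology);
        return 0;
    }

(The lstopo command-line tool shows the same information without writing any code.)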

In an increasingly NUMA (and NUNA!) world, hwloc is a valuable tool for achieving high performance.

Do you use the MPI C++ bindings?

Do you use the MPI C++ bindings in real-world MPI applications?

I’m not talking about using the MPI C bindings in C++ MPI applications (e.g., using MPI_Send() — a C binding).  I’m talking about writing substantial C++ MPI applications that use the MPI C++ bindings (such as MPI::Send()). 
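
To make the distinction concrete, here is a minimal two-rank sketch (mine, not from this post; assume an mpic++-style compiler wrapper) that sends a single integer from rank 0 to rank 1 using the C++ bindings, with the C-binding equivalent shown in a comment:

    #include <mpi.h>

    int main(int argc, char **argv)
    {
        MPI::Init(argc, argv);                  // C++ binding; the C binding would be MPI_Init()
        int rank = MPI::COMM_WORLD.Get_rank();
        int msg = 42;

        if (rank == 0) {
            // C binding, callable from C++:
            //   MPI_Send(&msg, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
            // C++ binding -- the style this post is asking about:
            MPI::COMM_WORLD.Send(&msg, 1, MPI::INT, 1, 0);
        } else if (rank == 1) {
            MPI::COMM_WORLD.Recv(&msg, 1, MPI::INT, 0, 0);
        }

        MPI::Finalize();
        return 0;
    }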

Do you do that?  Post a comment below and let me know.

The reason that I ask is because there is some confusion in the MPI Forum as to exactly how many people use the MPI C++ bindings — and whether we should un-deprecate the MPI C++ bindings. 
