Cisco Blog > High Performance Computing Networking

Random Tidbits

December 8, 2009 at 12:00 pm PST

Here are some quick, random notes:

  • Brock posted the MPI-3 podcast yesterday.  Have a listen if you’d like to hear about some of the new and upcoming efforts in MPI-3.
  • I saw a post on the MVAPICH list the other day that some random user picked up hwloc and submitted a patch to integrate it into MVAPICH.  Huzzah!
  • I hear quite a bit about MPI being run on the Intel prototype 48-core chip.  This is an interesting subject, but much remains to be seen about the programming models for BigCore chips.  The Intel press releases state that there is hardware support for message passing on the silicon, but what exactly does that mean?  Do we have direct access to it from user space?  …those and many other questions will be answered over time.

Who’s going to SC10 in New Orleans next year?

Open Resilient Cluster Manager (ORCM)

December 7, 2009 at 12:00 pm PST

Cisco announced this past weekend a new open source effort that is being launched under the Open MPI project umbrella named the Open Resilient Cluster Manager (or “OpenRCM”, or — my personal favorite — “ORCM”.  Say it 10 times fast!).

The Open MPI community is pleased to announce the establishment of a new subproject built upon the Open MPI code base. Using work initially contributed by Cisco Systems, the Open Resilient Cluster Manager is an open source project released under the Open MPI [BSD] license focused on development of an “always on” resource manager for systems spanning the range from embedded to very large clusters.

The ORCM web site neatly lays out the project goals:

  • Maintain operation of running applications in the face of single or multiple failures of any given process within that application.
  • Proactively detect incipient failures (hardware and/or software) and respond appropriately to maintain overall system operation.
  • Support both MPI and non-MPI applications.
  • Provide a research platform for exploring new concepts and methods in resilient systems.

“That’s great,” you say.  “But why on earth do we need yet another cluster resource manager?”

MPI Forum RCE podcast recorded

November 30, 2009 at 12:00 pm PST

Just a quick note today: Brock Palen and I just recorded an interview with Bill Gropp, MPI-2.2 Chair, and Rich Graham, MPI-3.0 Chair.  Brock should be posting the podcast within a week or so.

Technorati claim code

November 24, 2009 at 12:00 pm PST

Pardon the intrusion, folks: I need to re-claim this blog on Technorati, so I must publish this claim code where Technorati can find it.  I think I can delete this entry after Technorati verifies me; we’ll see…

Here it is, Technorati: GUYH9B8ZVYKR

OpenPA v1.0.2 release

November 23, 2009 at 12:00 pm PST

EDITOR’S NOTE: As with entries about hwloc, this announcement entry is a little off the beaten track for high performance networks, but it is definitely related and relevant.

The good folks at Argonne National Labs have released OpenPA (Portable Atomics) v1.0.2.  It’s a small library that implements processor atomic operations in a portable fashion (i.e., across platforms, compilers, etc. — including inline assembly support).  Here’s a link to the release announcement and the general OpenPA web site.

While OpenPA is not directly related to high performance networking, it is highly useful to have an extremely efficient, optimized set of atomic operations when multiple threads are sharing a single resource — such as a network resource.  Hence, this companion library is quite useful in driving full utilization of common network resources.  I keep beating the same drum: as core counts go up, little utilities like OpenPA and hwloc are going to be very, very important for extracting all the performance from your server that you expect to get.
