
Cisco Blog: High Performance Computing Networking

OpenPA v1.0.2 release

November 23, 2009 at 12:00 pm PST

EDITOR’S NOTE: As with entries about hwloc, this announcement entry is a little off the beaten track for high performance networks, but it is definitely related and relevant.

The good folks at Argonne National Laboratory have released OpenPA (Portable Atomics) v1.0.2.  It's a small library that implements processor atomic operations in a portable fashion (i.e., across platforms and compilers, including inline assembly support).  See the release announcement and the general OpenPA web site for details.

While OpenPA is not directly related to high performance networking, an extremely efficient, optimized set of atomic operations is invaluable when multiple threads share a single resource, such as a network endpoint.  Hence, this companion library is quite useful in driving full utilization of common network resources (see the sketch below).  I keep beating the same drum: as core counts go up, little utilities like OpenPA and hwloc are going to be very, very important for extracting all the performance from your server that you expect to get.
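
To make that concrete, here is a minimal sketch of several threads consuming "send credits" for a shared network endpoint via OpenPA's atomic primitives.  The opa_primitives.h header and the OPA_* calls follow OpenPA's C API; the credit-counter scenario itself is hypothetical, and the compile line is just a guess at a typical installation.

    /* Minimal sketch: threads consuming send credits for a shared
     * network endpoint via OpenPA atomics.  The credit counter is a
     * hypothetical example; the OPA_* calls are OpenPA's primitives.
     * Build (roughly): gcc credits.c -lopa -lpthread
     */
    #include <stdio.h>
    #include <stdint.h>
    #include <pthread.h>
    #include <opa_primitives.h>

    static OPA_int_t send_credits;    /* credits for the shared endpoint */

    /* Try to consume one credit; return 1 on success, 0 if none left. */
    static int take_credit(void)
    {
        if (OPA_fetch_and_decr_int(&send_credits) > 0)
            return 1;
        OPA_incr_int(&send_credits);  /* compensate the failed decrement */
        return 0;
    }

    static void *worker(void *arg)
    {
        int sent = 0;
        while (take_credit())
            ++sent;                   /* a real app would post a send here */
        printf("thread %ld consumed %d credits\n", (long)(intptr_t)arg, sent);
        return NULL;
    }

    int main(void)
    {
        pthread_t t[4];
        OPA_store_int(&send_credits, 1000);
        for (long i = 0; i < 4; ++i)
            pthread_create(&t[i], NULL, worker, (void *)(intptr_t)i);
        for (int i = 0; i < 4; ++i)
            pthread_join(t[i], NULL);
        return 0;
    }

The fetch-and-decrement plus compensating increment avoids a lock entirely; the fast path costs a single atomic instruction, which is exactly the kind of overhead you care about when many cores hammer one NIC.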


More MPI Forum feedback needed

November 20, 2009 at 12:00 pm PST

First the Fortran WG asked for some specific guidance (thank you very much to all who replied!); now the main Forum itself is conducting a community-wide survey to solicit feedback to help shape the MPI-3 standards process.  To protect against spam, the survey requires a password: mpi3.

In this survey, the MPI Forum is asking as many people as possible for feedback on the MPI-3 process: which features to include, which features to leave out, and so on.

We encourage you to forward this survey on to as many interested and relevant parties as possible.

It will take approximately 10 minutes to complete the questionnaire.


Come see us at SC09!

November 16, 2009 at 12:00 pm PST

I have nothing deep to say for this week’s blog entry since I’m sitting here in the Portland convention center feverishly working to finish my SC09 slides.  My partner in Fortran crime, Craig Rasmussen, is sitting next to me, feverishly working on our prototype Fortran 2003 MPI bindings implementation so that we can hand out proof-of-concept tarballs at the MPI Forum BOF on Wednesday evening.

All in all, it's a normal beginning to Supercomputing.  ;-)

The #SC09 Twitter feed is going crazy with about 6 billion tweets.  Just make sure you use the patented SC09 Fist Bump when in Portland.

Also be sure to drop by and see me in the Cisco Booth (#1847 — get a Cisco t-shirt!).  I’ll be walking around the floor for the Gala opening, but I have booth duty most mornings this week.  I’ll also be at the Open MPI BOF on Wednesday at 12:15pm and the MPI Forum BOF, also on Wednesday, but at 5:30pm.


hwloc v0.9.2 released

November 5, 2009 at 12:00 pm PST

It took a bunch of testing, but we finally got the first formal public release of hwloc (“Hardware Locality”) out the door.  From the announcement:

“hwloc provides command line tools and a C API to obtain the hierarchical map of key computing elements, such as: NUMA memory nodes, shared caches, processor sockets, processor cores, and processor “threads”. hwloc also gathers various attributes such as cache and memory information, and is portable across a variety of different operating systems and platforms.”

hwloc was primarily developed with High Performance Computing (HPC) applications in mind, but it is generally applicable to any software that wants or needs to know the physical layout of the machine on which it is running.  This is becoming increasingly important in today’s ever-growing-core-count compute servers.
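
As a taste of what the C API gives you, here is a minimal sketch that loads the machine topology and counts NUMA nodes, cores, and hardware threads.  It is written against the hwloc API as it stabilized in later releases (hwloc_topology_init/load, HWLOC_OBJ_NODE, and friends); exact names in the v0.9 series may differ slightly.

    /* Minimal sketch: enumerate the machine with the hwloc C API.
     * Written against the hwloc 1.x-style API; v0.9.2 names may differ.
     * Build (roughly): gcc topo.c -lhwloc
     */
    #include <stdio.h>
    #include <hwloc.h>

    int main(void)
    {
        hwloc_topology_t topo;

        hwloc_topology_init(&topo);   /* allocate a topology context */
        hwloc_topology_load(topo);    /* probe the current machine */

        printf("NUMA nodes: %d\n",
               hwloc_get_nbobjs_by_type(topo, HWLOC_OBJ_NODE));
        printf("cores:      %d\n",
               hwloc_get_nbobjs_by_type(topo, HWLOC_OBJ_CORE));
        printf("hw threads: %d\n",
               hwloc_get_nbobjs_by_type(topo, HWLOC_OBJ_PU));

        hwloc_topology_destroy(topo);
        return 0;
    }

An MPI implementation, for example, can use exactly this kind of query to decide how to bind its processes to cores.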


Other MPI-3 Forum activities

October 29, 2009 at 12:00 pm PST

Since there were a goodly number of comments on the MPI-3 Fortran question from the other day (please keep spreading that post around; the more feedback we get, the better!), I thought I'd give a quick synopsis of the other MPI-3 Forum Working Groups.  That's just to let you know that there's more going on in MPI-3 than just new yummy Fortran goodness!

The links below go to the wiki pages of the various working groups (WGs).  Some wiki pages are more active than others; some are fairly dormant, but that doesn't necessarily mean that the WG itself is dormant.  Some WGs simply choose to communicate more via email and/or regular teleconferences.  For example, the Tools WG has only sporadic emails on its mailing list, but it has a regularly-updated wiki and regular teleconferences, plus meeting times during the bi-monthly MPI Forum meetings.  Hence, each WG may work and communicate differently than its peers.
