
MPI-3 standard available in hardcover

November 10, 2012 at 2:34 pm PST

The MPI-3.0 standard is now available in hardcover (it’s green!). The book is sold at cost by Dr. Rolf Rabenseifner at HLRS; no profit is made on these sales. Here’s an excerpt from Rolf’s original announcement:

As a service (at cost) for users of the Message Passing Interface, HLRS has printed the new Standard, Version 3.0 (852 pages), in hardcover. The price is only 19.50 Euro or 25 US-$.

The book is available through the HLRS web site.

Read More »


Cisco @SC2012

November 9, 2012 at 6:24 am PST

Going to Salt Lake City for Supercomputing 2012 next week?  So are we!

Be sure to drop by and see us in the Cisco booth (#2517). I’ll be there, demonstrating and talking about our latest developments in ultra-low-latency Ethernet (hint: it includes 250 ns port-to-port Ethernet switch latency and our latest MPI/OS-bypass technology on the Cisco Virtualized NIC in Cisco UCS servers).

In short: everyone assumes Ethernet is slow.  Everyone is wrong.

I’ll also be co-hosting the Open MPI State of the Union BOF with George Bosilca from the University of Tennessee in the Wednesday noon timeslot (room 155B).

I’ll also be one of the judges in the Student Cluster Competition.  Be sure to drop by and see the teams; they make an amazing effort every year.

Finally, this isn’t really SC-related, but Cisco will be hosting the MPI Forum meeting again in December.  Register and come join in the discussion that shapes HPC for the next 10 years.

Read More »


Why MPI is Good for You (part 2)

October 28, 2012 at 6:00 am PST

A while ago, I posted “Why MPI is Good For You,” describing a one-byte change in Open MPI’s code base that fixed an incredibly subtle IPv6-based bug.

The point of that blog entry was that MPI represents an excellent layered design: it lets application developers focus on their applications while shielding them from all the complex wildebeests that roam under the covers in the implementation.

MPI implementors like me don’t know — and don’t really want to know — anything about complex numerical analysis, protein folding, seismic wave propagation, or any one of a hundred other HPC application areas.  And I’m assuming that MPI application developers don’t know — and don’t want to know — about the tricky underpinnings of how modern MPI implementations work.

Today, I present another motivating example for this thesis.

Read More »


The MPI C++ bindings are gone: what does it mean to you?

October 19, 2012 at 5:00 am PST

Jeff Hammond at Argonne tells me that there’s some confusion in the user community about MPI and C++. I explained how and why we got here in my first post; let Jeff (Hammond) and me now explain what this means to you.

The short version is: DON’T PANIC.

MPI implementations that provided the C++ bindings will likely continue to do so for quite a while.  I know that we have no intention of removing them from Open MPI any time soon, for example.  The MPICH guys have told me the same.
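
To make “don’t panic” concrete: an application that happens to be written in C++ but calls the C bindings (MPI_Init, MPI_Send, and friends) never used the C++ bindings in the first place, so nothing changes for it. Only code written against the MPI:: namespace is affected. Here’s a minimal sketch, purely for illustration, of the unaffected (and very common) case:

    // C++ application using the MPI *C* bindings: unaffected by the
    // removal of the C++ bindings from the MPI-3 standard.
    #include <mpi.h>
    #include <iostream>

    int main(int argc, char *argv[]) {
        MPI_Init(&argc, &argv);

        int rank, size;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);
        std::cout << "Hello from rank " << rank
                  << " of " << size << std::endl;

        MPI_Finalize();
        return 0;
    }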

I’ll discuss below what this means both to applications that are written in C++ and to applications that use the MPI C++ bindings.

Read More »


The MPI C++ bindings: what happened, and why?

October 16, 2012 at 5:00 am PST

Jeff Hammond at Argonne tells me that there’s some confusion in the user community about MPI and C++.

Let me see if I can clear up some of the issues.

In this blog entry, I’ll describe what has happened to the C++ bindings over time (up to and including their removal in MPI-3), and why.  In a second blog entry, I’ll describe what this means to real-world C++ MPI applications.
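
For readers who have never seen them: the bindings in question are the MPI:: class-based C++ interface that MPI-2 layered on top of the C API. For illustration, a trivial program written against those now-removed bindings looks something like this:

    // A trivial program written against the MPI-2 C++ bindings
    // (the MPI:: namespace), which MPI-3 removed from the standard.
    #include <mpi.h>
    #include <iostream>

    int main(int argc, char *argv[]) {
        MPI::Init(argc, argv);

        int rank = MPI::COMM_WORLD.Get_rank();
        int size = MPI::COMM_WORLD.Get_size();
        std::cout << "Hello from rank " << rank
                  << " of " << size << std::endl;

        MPI::Finalize();
        return 0;
    }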

Read More »
