
MPI-3 voting: results

May 31, 2012 at 5:00 am PST

Last March’s MPI Forum meeting was the last meeting at which proposals could get a “formal reading” for inclusion in MPI-3. Some were quite controversial. Some ended up being withdrawn before the next meeting.

This week’s Forum meeting in Japan saw the first vote (out of two) for each of the surviving proposals from the March meeting (see the full voting results here). Some continued to be quite controversial. Some didn’t survive their first votes (doh!). Others narrowly survived.

Here’s a summary of some of the users-will-care-about-these proposals, and how they fared:

Read More »


Open MPI v1.6 released

May 14, 2012 at 9:29 am PST

Marking the end of over 2 years of active development, the Open MPI project has released a new “stable” series of releases starting with v1.6.

Specifically, Open MPI maintains two concurrent release series:

  • Odd number releases are “feature development” releases (e.g., 1.5.x).  They’re considered to be stable and tested, but not yet necessarily “mature” (i.e., they haven’t had lots of real-world usage to shake out bugs).  New features are added over the life of feature development releases.
  • Even number releases are “super stable” releases (e.g., 1.6.x).  After enough time, feature development releases transition into super stable releases — the new functionality has been vetted by enough real-world usage to be considered stable enough for production sites.

Conceptually, it looks like this:
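(Roughly, as a text sketch of the flow, using the 1.5/1.6 series as the example:)

    1.5.0 → 1.5.1 → ... → 1.5.x        feature development series (odd)
                             |
                             v
    1.6.0 → 1.6.1 → ...                super stable series (even)

    1.7.x → ...                        next feature development series (odd)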

Read More »


The Architecture of Open Source Applications, Volume II

May 8, 2012 at 10:48 am PST

[Image: AOSA Volume II book cover]

It’s finally out!  The Architecture of Open Source Applications, Volume II, is now available in dead tree form (PDFs will be available for sale soon, I’m told).

Additionally, all content from the book will be freely available on aosabook.org sometime next week (!).

But know this: all royalties from the sales of this book go to Amnesty International.  So buy a copy; it’s for a good cause.

Both volumes 1 and 2 are excellent educational material for seeing how other well-known open source applications have been architected.  What better way to learn than to see how successful, widely-used open source software packages were designed?  Even better, after you read about each package, you can go look at the source code itself to further grok the issues.

Read More »


Polling vs. blocking message passing progress

April 20, 2012 at 6:17 am PST

Here’s a not-uncommon question that we get on the Open MPI mailing list:

Why do MPI processes consume 100% of the CPU when they’re just waiting for incoming messages?

The answer is rather straightforward: because each MPI process polls aggressively for incoming messages (as opposed to blocking and letting the OS wake it up when a new message arrives).  Most MPI implementations do this by default, actually.

The reasons why they do this are a little more complicated, but loosely speaking, one reason is that polling helps achieve the lowest possible latency for short messages.
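To make that tradeoff concrete, here’s a minimal sketch (not Open MPI’s internals; relaxed_recv is just a hypothetical helper name) of how an application can wait for a message without spinning: post a nonblocking receive and poll it with MPI_Test, sleeping between checks.

    #include <mpi.h>
    #include <unistd.h>

    /* Wait for a message without pegging a core at 100%: poll a nonblocking
       receive and sleep between checks.  The sleep returns the CPU to the OS,
       at the cost of up to ~1 ms of extra latency per message. */
    void relaxed_recv(void *buf, int count, MPI_Datatype type,
                      int src, int tag, MPI_Comm comm)
    {
        MPI_Request req;
        int done = 0;

        MPI_Irecv(buf, count, type, src, tag, comm, &req);
        while (!done) {
            MPI_Test(&req, &done, MPI_STATUS_IGNORE);
            if (!done) {
                usleep(1000);   /* 1 ms; tune to your latency tolerance */
            }
        }
    }

The sleep interval is the knob: shorter sleeps keep the added latency small but wake the process more often, while longer sleeps free the CPU at the cost of slower message delivery.  This is exactly the latency-vs-CPU tradeoff that aggressive polling inside the MPI library resolves in favor of latency.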

Read More »


EuroMPI 2012: Call for Papers

March 30, 2012 at 5:00 am PST

It’s that time of year again — time to submit EuroMPI 2012 papers!

The conference will be in Vienna, Austria on 23-26 September, 2012.  Please come join us!  It’s an excellent opportunity to hear how real-world users are actually using MPI, find out about bleeding-edge MPI-based research, and hear what the MPI Forum is up to.

Here’s the official EuroMPI 2012 CFP:

BACKGROUND AND TOPICS

EuroMPI is the preeminent meeting for users, developers and researchers to interact and discuss new developments and applications of message-passing parallel computing, in particular in and related to the Message Passing Interface (MPI). The annual meeting has a long, rich tradition, and the 19th European MPI Users’ Group Meeting will again be a lively forum for discussion of everything related to usage and implementation of MPI and other parallel programming interfaces. Traditionally, the meeting has focused on the efficient implementation of aspects of MPI, typically on high-performance computing platforms, benchmarking and tools for MPI, shortcomings and extensions of MPI, parallel I/O and fault tolerance, as well as parallel applications using MPI. The meeting is open towards other topics, in particular application experience and alternative interfaces for high-performance heterogeneous, hybrid, distributed memory systems.

Read More »
