
Polling vs. blocking message passing progress

April 20, 2012 at 6:17 am PST

Here’s a not-uncommon question that we get on the Open MPI mailing list:

Why do MPI processes consume 100% of the CPU when they’re just waiting for incoming messages?

The answer is rather straightforward: because each MPI process polls aggressively for incoming messages (as opposed to blocking and letting the OS wake it up when a new message arrives).  Most MPI implementations do this by default, actually.

The reasons they do this are a little more complicated, but loosely speaking, one reason is that polling helps get the lowest possible latency for short messages.

EuroMPI 2012: Call for Papers

March 30, 2012 at 5:00 am PST

It’s that time of year again — time to submit EuroMPI 2012 papers!

The conference will be in Vienna, Austria on 23-26 September, 2012.  Please come join us!  It’s an excellent opportunity to hear how real-world users are actually using MPI, find out about bleeding-edge MPI-based research, and hear what the MPI Forum is up to.

Here’s the official EuroMPI 2012 CFP:

BACKGROUND AND TOPICS

EuroMPI is the preeminent meeting for users, developers and researchers to interact and discuss new developments and applications of message-passing parallel computing, in particular in and related to the Message Passing Interface (MPI). The annual meeting has a long, rich tradition, and the 19th European MPI Users’ Group Meeting will again be a lively forum for discussion of everything related to usage and implementation of MPI and other parallel programming interfaces. Traditionally, the meeting has focused on the efficient implementation of aspects of MPI, typically on high-performance computing platforms, benchmarking and tools for MPI, shortcomings and extensions of MPI, parallel I/O and fault tolerance, as well as parallel applications using MPI. The meeting is open to other topics, in particular application experience and alternative interfaces for high-performance heterogeneous, hybrid, distributed memory systems.

The last new things in MPI-3

March 28, 2012 at 5:15 am PST

I know we’ve been talking about new MPI-3 things for forever.  But this is the last list of new things.

I promise.

Really.

I can say this with certainty because the Forum’s March meeting was the deadline for all new proposals to make it into the MPI-3 standard.  Anything else will have to be in MPI-<next> (where <next> may be 3.1, or 4, or …11.  Shrug).

Because of the deadline, we had a blizzard of proposals finally get into shape to be presented to the entire Forum.  Let’s talk about some of the more interesting ones…

New Fortran MPI bindings are “in”! And other MPI-3 stuff…

March 26, 2012 at 8:33 am PST

As of March 7, 2012, the new "use mpi_f08" bindings have been officially voted into the MPI-3 standard.

Woo hoo!!

A few other minor corrections made it into MPI-3 at the same meeting, but they’re boring / not worth discussing.

What is worth discussing, however, are some proposals that passed their first (of two) formal votes to make it into MPI-3 at that same meeting.

Let’s give a few details on each of these…

Open MPI v1.5 processor affinity options

March 9, 2012 at 5:00 am PST

Today we feature a deep-dive guest post from Ralph Castain, Senior Architect in the Advanced R&D group at Greenplum, an EMC company.

Jeff is lazy this week, so he asked that I provide some notes on the process binding options available in the Open MPI (OMPI) v1.5 release series.

First, though, a caveat. The binding options in the v1.5 series are pretty much the same as in the prior v1.4 series. However, future releases (beginning with the v1.7 series) will have significantly different options providing a broader array of controls. I won’t address those here, but will do so in a later post.
