Open MPI and the MPI-3 MPI_T interface

3 min read

Open MPI recently revamped its entire run-time parameter system (a.k.a., “MCA parameter system”) as part of its implementation effort for the “MPI_T” interface from MPI-3. The MPI_T interface is a standardized interface designed for MPI tools, but can be used by regular MPI application programs, too. Specifically, MPI_T provides programmatic access to two types of […]
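
As a small illustration (not from the excerpt above), here is a minimal sketch of querying MPI_T from a regular C program; it only counts the control variables the library exposes and assumes an MPI-3 implementation such as a recent Open MPI:

    #include <stdio.h>
    #include <mpi.h>

    int main(void)
    {
        int provided, num_cvars;

        /* MPI_T can be initialized independently of MPI_Init. */
        MPI_T_init_thread(MPI_THREAD_SINGLE, &provided);

        /* Ask the library how many control variables it exposes. */
        MPI_T_cvar_get_num(&num_cvars);
        printf("This MPI library exposes %d control variables\n", num_cvars);

        MPI_T_finalize();
        return 0;
    }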

Why MPI is Good for You (part 3)

2 min read

I’ve previously posted on “Why MPI is Good for You” (blog tag: why-mpi-is-good-for-you). The short version is that it hides lots and lots of underlying network stuff from the typical application programmer; stuff that they really, really don’t want to be involved in. Here’s another case study… Cisco’s upcoming ultra-low latency MPI transport is implemented […]

The History and Development of the MPI standard

1 min read

Today’s guest posting comes from Jesper Larsson Träff; he’s in the Faculty of Informatics, Institute of Information Systems, in the Research Group for Parallel Computing at the Vienna University of Technology (TU Wien). Have you ever wondered why MPI is designed the way that it is?  The slides below are from Jesper’s talk about the History and Development of […]

MPI Quiz

1 min read

A fun scenario was proposed in the MPI Forum today.  What do you think this code will do?

    MPI_Comm comm, save;
    MPI_Request req;
    MPI_Init(NULL, NULL);
    MPI_Comm_dup(MPI_COMM_WORLD, &comm);
    MPI_Comm_rank(comm, &rank);
    save = comm;
    MPI_Isend(smsg, 4194304, MPI_CHAR, rank, 123, comm, &req);
    MPI_Comm_free(&comm);
    MPI_Recv(rmsg, 4194304, MPI_CHAR, rank, 123, save, MPI_STATUS_IGNORE);
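
For readers who want to compile and run the quiz, here is a self-contained sketch of the same sequence of calls; the declarations of rank, smsg, and rmsg, the includes, and the trailing MPI_Wait/MPI_Finalize are my additions to make it build, not part of the original snippet:

    #include <mpi.h>

    /* 4 MB send and receive buffers (assumed; only the count appears in the quiz). */
    static char smsg[4194304];
    static char rmsg[4194304];

    int main(void)
    {
        MPI_Comm comm, save;
        MPI_Request req;
        int rank;

        MPI_Init(NULL, NULL);
        MPI_Comm_dup(MPI_COMM_WORLD, &comm);
        MPI_Comm_rank(comm, &rank);
        save = comm;

        /* Start a large nonblocking send to ourselves on the duplicated communicator... */
        MPI_Isend(smsg, 4194304, MPI_CHAR, rank, 123, comm, &req);

        /* ...free that communicator while the send is still pending... */
        MPI_Comm_free(&comm);

        /* ...then receive the message through the saved handle. */
        MPI_Recv(rmsg, 4194304, MPI_CHAR, rank, 123, save, MPI_STATUS_IGNORE);

        /* Not in the original snippet: complete the send and shut down cleanly. */
        MPI_Wait(&req, MPI_STATUS_IGNORE);
        MPI_Finalize();
        return 0;
    }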

Speaking about Open MPI / FOSS at Midwest Open Source Convention this weekend

1 min read

I’ve been a bit remiss about posting recently; it’s conference-paper-writing season, folks — sorry. But I thought I’d mention that I’ll be speaking at the Midwest Open Source Software Convention (MOSSCon) this weekend. I’ll be talking about my work in Open MPI, Hardware Locality (hwloc), and other open source projects, as well as Cisco’s role […]

New Addition to the Cisco MPI Team

1 min read

I’m very pleased to welcome a new member to the Cisco USNIC/MPI Team: Dave Goodell.  Welcome, Dave!  (today was his first day) Dave joins us from the MPICH team in the Mathematics and Computer Science division at Argonne National Laboratory.

Latency Analogies (part 2)

2 min read

In a prior blog post, I talked about latency analogies.  I compared levels of latency to your home, your neighborhood, a far-away neighborhood, and another city.  I talked about these localities in terms of communication. Let’s extend that analogy to talk about data locality.

I CAN HAS MPI

2 min read

The Cisco and Microsoft joint Cross-Animal Technology Project, a well-established player in the field of multi-species collaborative initiatives, is pleased to introduce its next project: a revolution in High Performance Computing (HPC): LOLCODE language bindings for the Message Passing Interface (MPI). CATP believes that cats are natural predatory programmers.  Who better to take advantage of all […]

Latency Analogies

1 min read

Multiple readers have told me that it is difficult for them to understand and/or visualize the effects of latency on their HPC applications, particularly in modern NUMA (non-uniform memory access) and NUNA (non-uniform network access) environments. Let’s break down the different levels of latency in typical modern server and network computing environments.