Cisco Blog > High Performance Computing Networking

Shared memory as an MPI transport

MPI is a great transport-agnostic inter-process communication (IPC) mechanism.  No matter where the peer process you’re trying to communicate with happens to be, MPI shuttles messages back and forth to it.

Most people think of MPI as communicating across a traditional network: Ethernet, InfiniBand, etc.  But let’s not forget that MPI is also used between processes on the same server.

A loopback network interface could be used to communicate between such processes; this presents a nice abstraction to the MPI implementation: all peer processes are reachable via the same networking interface (TCP sockets, OpenFabrics verbs, etc.).

But network loopback interfaces are typically not optimized for communicating between processes on the same server (a.k.a. “loopback” communication). For example, short message latency between MPI processes — a not-unreasonable metric to measure an MPI implementation’s efficiency — may be higher than it could be with a different transport layer.

Shared memory is a fast, efficient mechanism that can be used for IPC between processes on the same server. Let’s examine the rationale for using shared memory and how it is typically used as a message transport layer.

mpicc != mpicc

In my last post, I talked about why MPI wrapper compilers are Good for you.  The short version is that it is faaar easier to use a wrapper compiler than to force users to figure out what compiler and linker flags the MPI implementation needs — because sometimes they need a lot of flags.
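The flag-injecting job a wrapper compiler does can be sketched as a toy shell script.  The wrapper name (`mycc`), library paths, and flags below are all made up for illustration; they are not what any real `mpicc` passes (Open MPI’s `mpicc` does, however, have a real `--showme` option in this spirit):

```shell
# Create a toy "wrapper compiler": it forwards the user's arguments to
# the real compiler, appending the flags the (imaginary) library needs.
cat > mycc <<'EOF'
#!/bin/sh
LIB_CFLAGS="-I/opt/mylib/include"
LIB_LDFLAGS="-L/opt/mylib/lib -lmylib"
if [ "$1" = "--showme" ]; then
    # Just print the underlying command instead of running it.
    shift
    echo cc "$@" $LIB_CFLAGS $LIB_LDFLAGS
    exit 0
fi
exec cc "$@" $LIB_CFLAGS $LIB_LDFLAGS
EOF
chmod +x mycc

./mycc --showme my_app.c -o my_app
# prints: cc my_app.c -o my_app -I/opt/mylib/include -L/opt/mylib/lib -lmylib
```

The user types one short command and never has to know (or update) the `-I`/`-L`/`-l` soup that the library’s installation actually requires.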

Hence, MPI wrappers are Good for you.  They can save you a lot of pain.

That being said, they can also hurt portability, as one user noted on the Open MPI user’s mailing list recently.

Why MPI “wrapper” compilers are Good for you

An interesting thread on the Open MPI user’s mailing list came up the other day: a user wanted Open MPI’s “mpicc” wrapper compiler to accept the same command line options as MPICH’s “mpicc” wrapper.  On the surface, this is a very reasonable request.  After all, MPI is all about portability — so why not make the wrapper compilers be the same?

Unfortunately, this request opens a can of worms and exposes at least one unfortunate truth: the MPI API is portable, but other aspects of MPI are not, such as compiling/linking MPI applications, launching MPI jobs, etc.

Let’s first explore what wrapper compilers are, and why they’re good for you.

“All of life is not #MPI”

I retweeted a tweet today that may seem strange for an MPI guy.  I was echoing the sentiment that not everything in HPC revolves around MPI.

My rationale for retweeting is simple: I agree with the sentiment.

But I do want to point out that this statement has multiple levels to it.

Connection Management

One of the nice features of MPI is that its applications don’t have to worry about connection management.  There’s no concept of “open a connection to peer X” — in MPI, you just send or receive from peer X.

This is somewhat similar to many connectionless network transports (e.g., UDP), where you just send to a target address without explicitly creating a connection to that address.  But there’s a key difference: MPI’s transport is reliable, meaning that whatever you send is guaranteed to get there.

All this magic happens under the covers of the MPI API.  It means that in some environments, MPI must manage connections for you, and also must guarantee reliable delivery.
