
Shared memory as an MPI transport (part 2)

October 4, 2011 at 5:00 am PST

In my last post, I discussed the rationale for using shared memory as a transport between MPI processes on the same server as opposed to using traditional network API loopback mechanisms.

The two big reasons are:

  1. Short message latency is lower with shared memory (the ping-pong sketch after this list shows one way to measure it).
  2. Using shared memory frees OS and network hardware resources (including the PCI bus) for off-node communication.
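
To make reason #1 concrete, here is a minimal MPI ping-pong microbenchmark. It is my own sketch for illustration (the program name and iteration count are arbitrary, and there is no warm-up phase); running both ranks on the same server and comparing a shared-memory transport against network loopback exposes the short-message latency gap.

/* Hypothetical ping-pong latency sketch: run as, e.g.,
 *   mpirun -np 2 ./pingpong
 * with both ranks on the same server.  Half the average round-trip
 * time approximates the one-way short-message latency. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    const int iters = 10000;
    int rank;
    char buf = 0;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    double start = MPI_Wtime();
    for (int i = 0; i < iters; ++i) {
        if (rank == 0) {
            MPI_Send(&buf, 1, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
            MPI_Recv(&buf, 1, MPI_CHAR, 1, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
        } else if (rank == 1) {
            MPI_Recv(&buf, 1, MPI_CHAR, 0, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
            MPI_Send(&buf, 1, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
        }
    }

    if (rank == 0) {
        double avg_us = (MPI_Wtime() - start) / (2.0 * iters) * 1e6;
        printf("average one-way latency: %0.3f us\n", avg_us);
    }

    MPI_Finalize();
    return 0;
}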

Let’s discuss common ways in which MPI implementations use shared memory as a transport.
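
As a preview, here is one common pattern sketched in C: a set of fixed-size slots in the shared region, managed as a single-producer/single-consumer ring. The sender copies each short message in ("copy-in") and the receiver copies it back out ("copy-out"). All names and sizes here are invented for illustration; real MPI implementations use per-peer queues, carefully chosen memory ordering, and a separate protocol for large messages.

#include <assert.h>
#include <stdatomic.h>
#include <stdio.h>
#include <string.h>

#define SLOTS     8        /* queue depth; a power of two */
#define SLOT_SIZE 256      /* max bytes per queued message */

/* This struct would live in a region mapped by both processes. */
struct shm_queue {
    _Atomic unsigned head;          /* next slot the receiver reads */
    _Atomic unsigned tail;          /* next slot the sender writes  */
    char slots[SLOTS][SLOT_SIZE];
};

/* Sender: copy a message (len <= SLOT_SIZE) in; -1 means queue full. */
int shm_queue_send(struct shm_queue *q, const void *msg, size_t len)
{
    unsigned tail = atomic_load(&q->tail);
    if (tail - atomic_load(&q->head) == SLOTS)
        return -1;                          /* full; caller retries */
    memcpy(q->slots[tail % SLOTS], msg, len);      /* copy-in */
    atomic_store(&q->tail, tail + 1);       /* publish after the copy */
    return 0;
}

/* Receiver: copy the oldest message out; -1 means queue empty. */
int shm_queue_recv(struct shm_queue *q, void *msg, size_t len)
{
    unsigned head = atomic_load(&q->head);
    if (head == atomic_load(&q->tail))
        return -1;                          /* nothing to receive */
    memcpy(msg, q->slots[head % SLOTS], len);      /* copy-out */
    atomic_store(&q->head, head + 1);       /* release the slot */
    return 0;
}

int main(void)
{
    static struct shm_queue q;   /* in real use, this lives in shm */
    char out[SLOT_SIZE];

    assert(shm_queue_send(&q, "eager message", 14) == 0);
    assert(shm_queue_recv(&q, out, sizeof(out)) == 0);
    printf("dequeued: %s\n", out);
    return 0;
}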

Shared memory as an MPI transport

September 30, 2011 at 11:00 pm PST

MPI is a great transport-agnostic inter-process communication (IPC) mechanism. Regardless of where the peer process you’re trying to communicate with is located, MPI shuttles messages back and forth with it.

Most people think of MPI as communicating across a traditional network: Ethernet, InfiniBand, etc. But let’s not forget that MPI is also used between processes on the same server.

A loopback network interface could be used to communicate between them; this would present a nice abstraction to the MPI implementation: all peer processes are connected via the same networking interface (TCP sockets, OpenFabrics verbs, etc.).

But network loopback interfaces are typically not optimized for communicating between processes on the same server (a.k.a. “loopback” communication). For example, short message latency between MPI processes (a not-unreasonable metric of an MPI implementation’s efficiency) may be higher than it could be with a different transport layer.

Shared memory is a fast, efficient mechanism that can be used for IPC between processes on the same server. Let’s examine the rationale for using shared memory and how it is typically used as a message transport layer.
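
To give a flavor of the mechanism before diving into the details, here is a heavily boiled-down POSIX shared memory demo. It is my own sketch (the segment name, struct, and message are invented, and error handling is omitted): one process creates and maps a region with shm_open() and mmap(), a forked child writes a message into it, and the parent reads it out.

#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <sys/wait.h>
#include <unistd.h>

struct mailbox {
    volatile int full;    /* 0 = empty, 1 = message ready */
    char payload[64];
};

int main(void)
{
    /* Create and size a POSIX shared memory segment (error handling
     * omitted; may need -lrt when linking on older systems). */
    int fd = shm_open("/shm_transport_demo", O_CREAT | O_RDWR, 0600);
    ftruncate(fd, sizeof(struct mailbox));
    struct mailbox *mb = mmap(NULL, sizeof(*mb), PROT_READ | PROT_WRITE,
                              MAP_SHARED, fd, 0);
    mb->full = 0;

    if (fork() == 0) {                /* child acts as the sender */
        strcpy(mb->payload, "hello via shared memory");
        __sync_synchronize();         /* make the payload visible first */
        mb->full = 1;
        return 0;
    }

    while (!mb->full)                 /* parent spins as the receiver */
        ;                             /* (real code would not busy-wait) */
    printf("received: %s\n", mb->payload);

    wait(NULL);
    shm_unlink("/shm_transport_demo");
    return 0;
}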
