Cisco Blog: High Performance Computing Networking

If not RDMA, then what?

August 6, 2011 at 7:30 am PST

In prior blog posts, I talked about some of the challenges that are associated with implementing MPI over RMA- or RDMA-based networks.  The natural question then becomes, “What’s the alternative?”

There are at least two general classes of alternatives:

  • General purpose networks (e.g., Ethernet — perhaps using TCP/IP or even UDP)
  • Special purpose networks (i.e., built specifically for MPI)

This doesn’t even mention shared memory; I’ll return to shared memory as an MPI transport in a future post.
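To make the first option concrete, here’s a minimal sketch of what a TCP-based MPI transport has to do under the hood: turn a reliable byte stream into discrete, tagged messages. This is illustrative C only; the function names are mine, and a real implementation (e.g., Open MPI’s TCP BTL) layers eager/rendezvous protocols and non-blocking progress on top of this.

    #include <stdint.h>
    #include <sys/types.h>
    #include <sys/socket.h>
    #include <arpa/inet.h>

    /* Write exactly len bytes, coping with send()'s short writes. */
    static int send_all(int sock, const void *buf, size_t len)
    {
        const char *p = buf;
        while (len > 0) {
            ssize_t n = send(sock, p, len, 0);
            if (n <= 0)
                return -1;            /* error or peer closed */
            p += n;
            len -= (size_t) n;
        }
        return 0;
    }

    /* Send one message with a tiny header: tag + payload length.
     * TCP is a byte stream, so the receiver needs this framing to find
     * message boundaries; that bookkeeping is part of the price of
     * using a general-purpose network for MPI traffic. */
    static int send_msg(int sock, uint32_t tag, const void *payload,
                        uint32_t len)
    {
        uint32_t hdr[2] = { htonl(tag), htonl(len) };
        if (send_all(sock, hdr, sizeof(hdr)) != 0)
            return -1;
        return send_all(sock, payload, len);
    }

The receiving side does the mirror image: read the eight header bytes first, then loop on recv() until the advertised payload length has arrived.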

Read More »


EuroMPI 2011 Call for Participation

July 22, 2011 at 5:00 am PST

WHAT: EuroMPI 2011 Conference
WHERE: Santorini, Greece
WHEN: September 18-21, 2011
URL: www.eurompi2011.org

BACKGROUND AND TOPICS

EuroMPI is the primary meeting where the users and developers of MPI and other message-passing programming environments can interact. The 18th European MPI Users’ Group Meeting will be a forum for the users and developers of MPI, but will also welcome hybrid programming models that combine message passing with the programming of modern architectures such as multi-core processors or accelerators.

Through the presentation of contributed papers, poster presentations and invited talks, attendees will have the opportunity to share ideas and experiences to contribute to the improvement and furthering of message-passing and related parallel programming paradigms.

Read More »


Registered Memory (RMA / RDMA) and MPI implementations

July 20, 2011 at 5:00 am PST

In a prior blog post, I talked about RMA (and RDMA) networks, and what they mean to MPI implementations.  In this post, I’ll talk about one of the consequences of RMA networks: registered memory.

Registered memory is something that most HPC administrators and users have at least heard of, but may not fully understand.

Let me clarify it for you: registered memory is both a curse and a blessing.

It’s more of the former than the latter, if you ask me, but MPI implementations need to use (and track) registered memory to get high performance on today’s high-performance networking API stacks.
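To make that concrete, here is a minimal sketch of what registration looks like with the OpenFabrics verbs API. The calls shown (ibv_reg_mr and friends) are the real verbs entry points; the buffer size is arbitrary and most error handling is trimmed for brevity.

    #include <stdio.h>
    #include <stdlib.h>
    #include <infiniband/verbs.h>

    int main(void)
    {
        int num = 0;
        struct ibv_device **devs = ibv_get_device_list(&num);
        if (devs == NULL || num == 0) {
            fprintf(stderr, "no RDMA-capable devices found\n");
            return 1;
        }

        struct ibv_context *ctx = ibv_open_device(devs[0]);
        struct ibv_pd *pd = ibv_alloc_pd(ctx);

        size_t len = 1 << 20;               /* a 1 MiB buffer */
        void *buf = malloc(len);

        /* The expensive call: pin the pages and give the NIC an
         * address translation, so it can DMA without the OS's help. */
        struct ibv_mr *mr = ibv_reg_mr(pd, buf, len,
                                       IBV_ACCESS_LOCAL_WRITE |
                                       IBV_ACCESS_REMOTE_READ |
                                       IBV_ACCESS_REMOTE_WRITE);
        if (mr == NULL) {
            perror("ibv_reg_mr");
            return 1;
        }
        printf("registered %zu bytes: lkey=0x%x rkey=0x%x\n",
               len, (unsigned) mr->lkey, (unsigned) mr->rkey);

        /* Deregistering unpins the memory.  Knowing when it is safe
         * (and cheap) to do this is exactly the bookkeeping burden
         * that MPI implementations carry. */
        ibv_dereg_mr(mr);
        ibv_dealloc_pd(pd);
        ibv_close_device(ctx);
        ibv_free_device_list(devs);
        free(buf);
        return 0;
    }

Because ibv_reg_mr is slow (it pins pages and programs the NIC), MPI implementations typically cache registrations and reuse them across messages; that cache is the “tracking” I mentioned above.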

Read More »


“RDMA” — what does it mean to MPI applications?

July 16, 2011 at 8:13 am PST

RDMA stands for Remote Direct Memory Access.  The acronym is typically associated with OpenFabrics networks such as iWARP, IBoE (a.k.a. RoCE), and InfiniBand.  But “RDMA” is just today’s flavor du jour of a more general concept: RMA (remote memory access), or directly reading and writing a peer’s memory space.

RMA implementations (including RDMA-based networks, such as OpenFabrics) typically include one or more of the following technologies (see the sketch after the list):

  1. Operating system bypass: userspace applications communicate directly with the network hardware.
  2. Hardware offload: network activity is driven by the NIC, not the main CPU.
  3. Hardware or software notification: the application is told when messages finish sending or are received.
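
Here is a sketch of how those three pieces look in OpenFabrics verbs code: an RDMA write posted from userspace (no system call), carried out by the NIC, and signaled back through a completion queue. It assumes the queue pair is already connected and both buffers are already registered (setup omitted); the helper name is hypothetical.

    #include <stdint.h>
    #include <infiniband/verbs.h>

    /* Post one RDMA write from a registered local buffer into the
     * peer's registered memory, then spin until it completes. */
    static int rdma_write_and_wait(struct ibv_qp *qp, struct ibv_cq *cq,
                                   struct ibv_mr *local_mr, size_t len,
                                   uint64_t remote_addr, uint32_t rkey)
    {
        struct ibv_sge sge = {
            .addr   = (uintptr_t) local_mr->addr,
            .length = (uint32_t) len,
            .lkey   = local_mr->lkey,
        };
        struct ibv_send_wr wr = {
            .wr_id      = 42,                 /* echoed in the completion */
            .sg_list    = &sge,
            .num_sge    = 1,
            .opcode     = IBV_WR_RDMA_WRITE,  /* peer CPU is not involved */
            .send_flags = IBV_SEND_SIGNALED,  /* ask for a notification */
        };
        wr.wr.rdma.remote_addr = remote_addr;
        wr.wr.rdma.rkey        = rkey;

        /* (1) OS bypass: this call talks to the NIC from userspace. */
        struct ibv_send_wr *bad = NULL;
        if (ibv_post_send(qp, &wr, &bad) != 0)
            return -1;

        /* (2) Offload: the NIC moves the bytes while we merely...
         * (3) ...poll for the notification that it has finished. */
        struct ibv_wc wc;
        int n;
        while ((n = ibv_poll_cq(cq, 1, &wc)) == 0)
            ;   /* busy-poll for brevity; real code makes MPI progress */
        return (n == 1 && wc.status == IBV_WC_SUCCESS) ? 0 : -1;
    }

Whether an MPI implementation busy-polls, sleeps, or interleaves other progress while waiting on that completion queue is exactly the kind of design decision the question below is getting at.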

How are these technologies typically used in MPI implementations?

Read More »


hwloc article published in Linux Pro Magazine

July 14, 2011 at 3:41 pm PST

Brice, Samuel, and I got the crazy idea to write a magazine article about hwloc to expand its reach to people outside the HPC community. We wrote something up and submitted it to Linux Pro Magazine — and they accepted it!

I just got my copy in the mail — it’s published in the July issue: “Lessons in Locality: hwloc.”

[Image: first page of the article]

Read More »
