
The Common Communication Interface (CCI)

April 30, 2012 at 5:00 am PST

Today we feature part 2 of 2 in a deep-dive guest post from Scott Atchley, HPC Systems Engineer in the Technology Integration Group at Oak Ridge National Laboratory.

Given the goals described in part 1, we are developing the Common Communication Interface (CCI) as an open-source project for use by any application that needs a NAL. Note that CCI does not replace MPI, since it provides neither message matching nor collectives; rather, an MPI implementation could use it as its NAL (and so could a parallel file system). For applications that rely on the sockets API, CCI can provide improved performance on systems with high-performance interconnects and fall back to actual sockets when none are present.
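
To give a feel for the API, here is a minimal sketch of what client-side CCI usage might look like: initialize the library, create an endpoint, connect, and then poll the endpoint’s event queue for completions. The function and constant names follow the published CCI API, but the exact signatures, the server URI, and the event-field names below are approximations rather than text copied from cci.h — treat this as a sketch, not working code.

/* Sketch of client-side CCI usage; names approximate the published
 * API but are not verified against cci.h. */
#include <cci.h>

int main(void)
{
    uint32_t caps = 0;
    cci_endpoint_t *endpoint = NULL;
    cci_event_t *event = NULL;

    /* Initialize the library and open an endpoint on the default device. */
    cci_init(CCI_ABI_VERSION, 0, &caps);
    cci_create_endpoint(NULL, 0, &endpoint, NULL);

    /* Request a reliable, unordered connection to a (placeholder) server URI. */
    cci_connect(endpoint, "ip://server:5555", NULL, 0,
                CCI_CONN_ATTR_RU, NULL, 0, NULL);

    /* Progress by polling the endpoint's event queue; a NAL built on CCI
     * could instead block on the endpoint's OS handle when it wants to sleep. */
    while (cci_get_event(endpoint, &event) != CCI_SUCCESS)
        ;   /* spin */

    if (event->type == CCI_EVENT_CONNECT) {
        cci_connection_t *conn = event->connect.connection;   /* field name approximate */
        const char msg[] = "hello";
        cci_send(conn, msg, sizeof(msg), NULL, 0);   /* small message, no matching needed */
    }
    cci_return_event(event);
    return 0;
}

The point to take away is the event-queue model: everything, including connection establishment, completes asynchronously through the event queue, which is what lets a NAL built on CCI choose between polling and blocking for progress.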

How does the CCI design meet the previously-described criteria for a new NAL?


Network APIs: the good, the bad, and the ugly

April 27, 2012 at 5:00 am PST

Today we feature a deep-dive guest post from Scott Atchley, HPC Systems Engineer in the Technology Integration Group at Oak Ridge National Laboratory.  This post is part 1 of 2.

In the world of high-performance computing, we jump through hoops to extract the last bit of performance from our machines. The vast majority of processes use the Message Passing Interface (MPI) to handle communication. Each MPI implementation abstracts the underlying network away, depending on the available interconnect(s). Ideally, the interconnect offers some form of operating system (OS) bypass and remote memory access in order to provide the lowest possible latency and highest possible throughput. If not, MPI typically falls back to TCP sockets. The MPI’s network abstraction layer (NAL) then optimizes the MPI communication pattern to match that of the interconnect’s API. For similar reasons, most distributed parallel filesystems, such as Lustre, PVFS2, and GPFS, also rely on a NAL to maximize performance.
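
To make the NAL idea concrete, here is a purely hypothetical sketch — not taken from Open MPI or any other implementation — of the kind of function-pointer table an MPI or parallel file system might keep, with transport selection preferring an OS-bypass interconnect and falling back to TCP sockets when none is available. The nal_verbs and nal_tcp names are invented for illustration.

/* Hypothetical NAL vtable; names are illustrative, not from a real MPI. */
#include <stddef.h>
#include <stdbool.h>

typedef struct nal_module {
    const char *name;                               /* e.g. "verbs", "tcp" */
    bool  (*available)(void);                       /* is this interconnect present? */
    int   (*init)(void);
    int   (*send)(int peer, const void *buf, size_t len);
    int   (*progress)(void);                        /* poll for completions */
} nal_module_t;

/* Each transport supplies one of these; definitions omitted in this sketch. */
extern nal_module_t nal_verbs, nal_tcp;

/* Prefer OS-bypass transports, fall back to TCP sockets otherwise. */
static nal_module_t *select_nal(void)
{
    nal_module_t *candidates[] = { &nal_verbs, &nal_tcp };
    for (size_t i = 0; i < sizeof(candidates) / sizeof(candidates[0]); i++)
        if (candidates[i]->available())
            return candidates[i];
    return NULL;    /* no usable network */
}

The appeal of this structure is that everything above the table — matching, collectives, striping — is written once against a handful of entry points, and only the transport modules underneath differ per interconnect.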


Hiring Linux Kernel hackers

April 22, 2012 at 7:02 pm PST

Just in case you didn’t see my tweet: my group is hiring!

We need some Linux kernel hackers for some high-performance networking stuff.  This includes MPI and other verticals.

I believe that the official job description is still working its way through channels before it appears on the official external Cisco job-posting site, but the gist of it is Linux kernel work for Cisco x86 servers (blades and rack-mount) and NICs in high performance networking scenarios.

Are you interested?  If so, send me an email with your resume — I’m jsquyres at cisco dot com.


Polling vs. blocking message passing progress

April 20, 2012 at 6:17 am PST

Here’s a not-uncommon question that we get on the Open MPI mailing list:

Why do MPI processes consume 100% of the CPU when they’re just waiting for incoming messages?

The answer is rather straightforward: because each MPI process polls aggressively for incoming messages (as opposed to blocking and letting the OS wake it up when a new message arrives).  Most MPI implementations do this by default, actually.

The reasons why they do this are a little more complicated, but loosely speaking, one reason is that polling helps achieve the lowest possible latency for short messages.
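
As a rough illustration — and emphatically not Open MPI’s actual progress engine — here is what the two waiting styles might look like for a sockets-based transport; message_ready() is a hypothetical non-blocking check. The polling loop keeps a core at 100% but notices a new message immediately, while the blocking version sleeps in poll(2) and pays a wake-up cost on every message.

/* Two ways to wait for an incoming message on a socket fd;
 * illustrative only, not Open MPI's progress engine. */
#include <poll.h>
#include <stdbool.h>

bool message_ready(int fd);       /* hypothetical non-blocking check */

/* Aggressive polling: lowest latency, but the CPU shows 100% busy. */
void wait_polling(int fd)
{
    while (!message_ready(fd))
        ;   /* spin; could call sched_yield() to be slightly friendlier */
}

/* Blocking: the process sleeps until the kernel reports data,
 * freeing the core but adding wake-up latency to every message. */
void wait_blocking(int fd)
{
    struct pollfd pfd = { .fd = fd, .events = POLLIN };
    poll(&pfd, 1, -1);            /* -1 = wait indefinitely */
}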


EuroMPI 2012: Call for Papers

March 30, 2012 at 5:00 am PST

It’s that time of year again — time to submit EuroMPI 2012 papers!

The conference will be in Vienna, Austria on 23-26 September, 2012.  Please come join us!  It’s an excellent opportunity to hear how real-world users are actually using MPI, find out about bleeding-edge MPI-based research, and hear what the MPI Forum is up to.

Here’s the official EuroMPI 2012 CFP:

BACKGROUND AND TOPICS

EuroMPI is the preeminent meeting for users, developers and researchers to interact and discuss new developments and applications of message-passing parallel computing, in particular in and related to the Message Passing Interface (MPI). The annual meeting has a long, rich tradition, and the 19th European MPI Users’ Group Meeting will again be a lively forum for discussion of everything related to usage and implementation of MPI and other parallel programming interfaces. Traditionally, the meeting has focused on the efficient implementation of aspects of MPI, typically on high-performance computing platforms, benchmarking and tools for MPI, shortcomings and extensions of MPI, parallel I/O and fault tolerance, as well as parallel applications using MPI. The meeting is open to other topics, in particular application experience and alternative interfaces for high-performance heterogeneous, hybrid, distributed memory systems.
