Cisco Blogs

Jeff Squyres

The MPI Guy

UCS Platform Software

Dr. Jeff Squyres is Cisco's representative to the MPI Forum standards body and is Cisco's core software developer in the open source Open MPI project. He has worked in the High Performance Computing (HPC) field since his early graduate-student days in the mid-1990s, and is a chapter author of the MPI-2 and MPI-3 standards.

Jeff received both a BS in Computer Engineering and a BA in English Literature from the University of Notre Dame in 1994; he received an MS in Computer Science and Engineering from Notre Dame two years later in 1996. After some active-duty tours in the military, Jeff received his Ph.D. in Computer Science and Engineering from Notre Dame in 2004. Jeff then worked as a post-doctoral research associate at Indiana University until he joined Cisco in 2006.

At Cisco, Jeff is part of the VIC group (Virtual Interface Card, Cisco's virtualized server NIC) in the larger UCS server group. He designs and writes systems-level software for optimized network I/O in HPC and other high-performance applications. Jeff also represents Cisco to several open source software communities and to the MPI Forum standards body.

Articles

EuroMPI’13 Cisco slides: Open MPI Process Affinity User Interface

1 min read

The slides below are from my presentation at EuroMPI’13 about Open MPI’s flexible process affinity interface (in OMPI 1.7.2 and later). I described this system in prior blog entries (one, two, three), but many people keep asking me about it. Josh Hursey from the University of Wisconsin-La Crosse wrote this IMUDI paper about the interface (IMUDI […]
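
The affinity settings themselves are mpirun command-line options rather than code, but a quick way to see their effect is to have each rank report where the operating system says it is bound. Below is a minimal sketch along those lines; it is not part of Open MPI's interface, and it assumes Linux (it uses the non-portable sched_getaffinity call):

/* report_binding.c: a hedged sketch (Linux-only, not part of Open MPI's
 * interface) in which each MPI rank prints the set of cores the OS says
 * it is currently bound to, so you can see the effect of whatever
 * affinity options you passed to mpirun.
 */
#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>
#include <mpi.h>

int main(int argc, char *argv[])
{
    int rank;
    cpu_set_t set;
    char cores[1024];
    int len = 0;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    CPU_ZERO(&set);
    if (sched_getaffinity(0, sizeof(set), &set) == 0) {
        for (int cpu = 0; cpu < CPU_SETSIZE && len < (int) sizeof(cores) - 8; ++cpu) {
            if (CPU_ISSET(cpu, &set)) {
                len += snprintf(cores + len, sizeof(cores) - len, "%d ", cpu);
            }
        }
    }
    printf("Rank %d is bound to core(s): %s\n", rank, len > 0 ? cores : "(unknown)");

    MPI_Finalize();
    return 0;
}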

EuroMPI’13 Cisco slides: UCS, Nexus, usNIC

1 min read

A few people asked me to post the slides that I just presented in the Cisco vendor session at EuroMPI’13. In short, I gave a brief overview of our servers and switches, and then some technical details of how we use SR-IOV in our usNIC, etc. Here are the slides:

MPI newbie: Building MPI applications

4 min read

In a previous post, I gave some (very) general requirements for how to set up / install an MPI implementation. This is post #2 in the series: now that you’ve got a shiny new computational cluster, and you’ve got one or more MPI implementations installed, I’ll talk about how to build, compile, and link applications that […]
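
As a small taste of what "build, compile, and link" means in practice: most MPI implementations ship a wrapper compiler, commonly named mpicc for C, that adds the right include and library flags for you. Here is a minimal test program you could build with it (the exact wrapper name and flags depend on your implementation, so check its documentation):

/* hello_mpi.c: a minimal MPI program for testing that you can compile and
 * link against your MPI installation.  A typical build command is:
 *
 *     mpicc hello_mpi.c -o hello_mpi
 *
 * (mpicc is the conventional C wrapper compiler name; yours may differ.)
 */
#include <stdio.h>
#include <mpi.h>

int main(int argc, char *argv[])
{
    int rank, size;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    printf("Hello from rank %d of %d\n", rank, size);
    MPI_Finalize();
    return 0;
}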

MPI newbie: Requirements and installation of an MPI

4 min read

I often get questions from those who are just starting with MPI; they want to know common things such as: how to install / set up an MPI implementation, how to compile their MPI applications, how to run their MPI applications, and how to learn more about MPI. This will be the first blog entry of several […]

Ultra low latency Ethernet (UCS “usNIC”): questions and answers

4 min read

I have previously written a few details about our upcoming ultra low latency solution for High Performance Computing (HPC).  Since my last blog post, a few of you sent me emails asking for more technical details about it. So let’s just put it all out there.

Short message latency and NUMA effects

2 min read

I’ve previously written a bunch about the effects of location, Location, LOCATION! on MPI applications. Here’s another subtle NUMA effect that a well-tuned MPI implementation can hide from you: intelligently distributing traffic between multiple network interfaces. Yeah, yeah, most MPI implementations have had so-called “multi-rail” support for a long time (i.e., using multiple network interfaces […]

How many network links do you have for MPI traffic?

2 min read

If you’re a bargain basement HPC user, you might well scoff at the idea of having more than one network interface for your MPI traffic. “I’ve got (insert your favorite high bandwidth network name here)! That’s plenty to serve all my cores! Why would I need more than that?” I can think of (at least) […]

Open MPI and the MPI-3 MPI_T interface

3 min read

Open MPI recently revamped its entire run-time parameter system (a.k.a., “MCA parameter system”) as part of its implementation effort for the “MPI_T” interface from MPI-3. The MPI_T interface is a standardized interface designed for MPI tools, but it can be used by regular MPI application programs, too. Specifically, MPI_T provides programmatic access to two types of […]
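
For the curious, here is a minimal, implementation-agnostic sketch of what that programmatic access can look like: it uses the standard MPI-3 MPI_T control-variable calls to list whatever control variables (in Open MPI's case, MCA parameters) the library exposes. The buffer sizes below are arbitrary choices for illustration:

/* list_cvars.c: a minimal sketch using the standard MPI-3 MPI_T
 * control-variable calls to print the control variables that the MPI
 * implementation chooses to expose (e.g., Open MPI's MCA parameters).
 */
#include <stdio.h>
#include <mpi.h>

int main(void)
{
    int provided, num_cvars;

    /* MPI_T has its own init/finalize, separate from MPI_Init/MPI_Finalize */
    MPI_T_init_thread(MPI_THREAD_SINGLE, &provided);
    MPI_T_cvar_get_num(&num_cvars);
    printf("This MPI exposes %d control variables\n", num_cvars);

    for (int i = 0; i < num_cvars; ++i) {
        char name[256], desc[1024];
        int name_len = sizeof(name), desc_len = sizeof(desc);
        int verbosity, bind, scope;
        MPI_Datatype datatype;
        MPI_T_enum enumtype;

        if (MPI_T_cvar_get_info(i, name, &name_len, &verbosity, &datatype,
                                &enumtype, desc, &desc_len, &bind,
                                &scope) == MPI_SUCCESS) {
            printf("  [%d] %s: %s\n", i, name, desc);
        }
    }

    MPI_T_finalize();
    return 0;
}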

Why MPI is Good for You (part 3)

2 min read

I’ve previously posted on “Why MPI is Good for You” (blog tag: why-mpi-is-good-for-you). The short version is that it hides lots and lots of underlying network stuff from the typical application programmer; stuff that they really, really don’t want to be involved in. Here’s another case study… Cisco’s upcoming ultra-low latency MPI transport is implemented […]