Jeff Squyres

The MPI Guy

UCS Platform Software

Dr. Jeff Squyres is Cisco's representative to the MPI Forum standards body and is Cisco's core software developer in the open source Open MPI project. He has worked in the High Performance Computing (HPC) field since his early graduate-student days in the mid-1990s, and is a chapter author of the MPI-2 and MPI-3 standards.

Jeff received both a BS in Computer Engineering and a BA in English Literature from the University of Notre Dame in 1994; he received an MS in Computer Science and Engineering from Notre Dame two years later, in 1996. After some active-duty tours in the military, Jeff received his Ph.D. in Computer Science and Engineering from Notre Dame in 2004. Jeff then worked as a post-doctoral research associate at Indiana University until he joined Cisco in 2006.

At Cisco, Jeff is part of the VIC group (Virtual Interface Card, Cisco's virtualized server NIC) within the larger UCS server group. He designs and writes systems-level software for optimized network I/O in HPC and other high-performance applications. Jeff also represents Cisco in several open source software communities and at the MPI Forum standards body.


Overlap of communication and computation (part 2)

In part 1 of this series, I discussed various peer-wise technologies and techniques that MPI implementations typically use for communication / computation overlap. MPI-3.0, published in 2012, forced a change in the overlap game. Specifically: most prior overlap work had focused on individual messages between a pair of peers. These were very […]
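One of the MPI-3.0 additions most relevant to this shift is nonblocking collectives. As a rough sketch (mine, not code from the post), an application can start a collective operation, compute on unrelated data, and only wait for the result when it is actually needed:

```c
/* Sketch: overlapping an MPI-3 nonblocking collective with local work. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    double local = 1.0, global = 0.0;
    MPI_Request req;

    /* Start the reduction, but do not wait for it yet (MPI-3's MPI_Iallreduce). */
    MPI_Iallreduce(&local, &global, 1, MPI_DOUBLE, MPI_SUM,
                   MPI_COMM_WORLD, &req);

    /* ...do computation here that does not depend on "global"... */

    /* Complete the collective before using the result. */
    MPI_Wait(&req, MPI_STATUS_IGNORE);

    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    if (rank == 0) {
        printf("sum = %f\n", global);
    }

    MPI_Finalize();
    return 0;
}
```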

Overlap of communication and computation (part 1)

I’ve mentioned computation / communication overlap before (e.g., here, here, and here). Various types of networks and NICs have long since had some form of overlap. Some had better-quality overlap than others, from an HPC perspective. But with MPI-3, we’re really entering a new realm of overlap. In this first of two blog entries, I’ll […]
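For reference, the peer-wise overlap that networks and NICs have long supported shows up on the application side as MPI's nonblocking point-to-point calls. A generic sketch (not code from the post) of the usual pattern:

```c
/* Sketch: classic point-to-point overlap with nonblocking send/receive. */
#include <mpi.h>

void exchange_and_compute(double *sendbuf, double *recvbuf, int n, int peer)
{
    MPI_Request reqs[2];

    /* Post the receive and the send up front... */
    MPI_Irecv(recvbuf, n, MPI_DOUBLE, peer, 0, MPI_COMM_WORLD, &reqs[0]);
    MPI_Isend(sendbuf, n, MPI_DOUBLE, peer, 0, MPI_COMM_WORLD, &reqs[1]);

    /* ...compute on data that does not depend on recvbuf... */

    /* ...then wait for both transfers before touching the buffers again. */
    MPI_Waitall(2, reqs, MPI_STATUSES_IGNORE);
}
```

How much of the transfer actually progresses while the application computes depends on the network and the MPI implementation, which is exactly what this pair of posts digs into.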

HPC over UDP

A few months ago, I posted an entry entitled “HPC in L3“.  My only point for that entry was to remove the “HPC in L3? That’s a terrible idea!” knee-jerk reaction that us old-timer HPC types have. I mention this because we released a free software update a few days ago for the Cisco usNIC […]

Unsung heroes: MPI run-time environments

Most people immediately think of short-message latency, or perhaps large-message bandwidth, when thinking about MPI. But have you ever thought about what your MPI implementation has to do before your application even calls MPI_INIT? Hint: it’s pretty crazy complex, from an engineering perspective. Think of it this way: operating systems natively provide a […]
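To see how much the run-time environment hides, consider the smallest possible MPI program: by the time MPI_Init returns, every process has already been launched (possibly across many nodes), given its identity, and joined into a single parallel job. A minimal sketch:

```c
/* Minimal program: everything the run-time environment arranged before and
 * during MPI_Init (process launch, wire-up, rank assignment) is visible here. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    /* Each process already knows who it is and how big the job is. */
    printf("Hello from rank %d of %d\n", rank, size);

    MPI_Finalize();
    return 0;
}
```

Launching this with something like `mpirun -np 4 ./hello` feels trivial to the user precisely because the run-time environment does all of the launching and wire-up work behind the scenes.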

Traffic in parallel

In my last entry, I gave a vehicles-driving-in-a-city analogy for network traffic. Let’s tie that analogy back to HPC and MPI.

Still more traffic

I periodically write about network traffic, and how general / datacenter network traffic analysis is related to MPI / HPC. In my last entry, I mentioned how network traffic has many characteristics in common with distributed computing. Routing decisions, for example, are made independently at each network switch. Consider if you were looking down at […]

Traffic (redux)

I’ve written about network traffic before (see this post and this post). It’s the subject of endless blog posts, help forums, and instructional guides across the internet. In a High Performance Computing (HPC) context, there are some fascinating aspects of network traffic that are fairly different from other types of network traffic.

BigMPI: You can haz moar counts!

Jeff Hammond has recently started developing the BigMPI library. BigMPI is intended to handle all the drudgery of sending and receiving large messages in MPI. In Jeff’s own words: [BigMPI is an] Interface to MPI for large messages, i.e. those where the count argument exceeds INT_MAX but is still less than SIZE_MAX. BigMPI is designed […]
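To make the drudgery concrete: MPI's count arguments are plain C ints, so a buffer with more than INT_MAX elements cannot be sent in a single call. Below is a hand-rolled sketch of one common workaround, chunking a large buffer into multiple messages. The helper send_big() is hypothetical and is not BigMPI's actual API; it just shows the kind of bookkeeping BigMPI is meant to automate.

```c
/* Sketch of the large-count problem: chunk a buffer of more than INT_MAX
 * elements into multiple MPI_Send calls of at most INT_MAX elements each.
 * send_big() is a hypothetical helper, NOT BigMPI's interface. */
#include <mpi.h>
#include <limits.h>
#include <stddef.h>

int send_big(const double *buf, size_t count, int dest, int tag, MPI_Comm comm)
{
    size_t sent = 0;
    while (sent < count) {
        size_t n = count - sent;
        if (n > (size_t)INT_MAX) {
            n = (size_t)INT_MAX;   /* largest count a single send can carry */
        }
        int rc = MPI_Send(buf + sent, (int)n, MPI_DOUBLE, dest, tag, comm);
        if (rc != MPI_SUCCESS) {
            return rc;
        }
        sent += n;
    }
    return MPI_SUCCESS;
}
```

The receiver needs a mirror-image helper that posts the matching receives, and every call site in an application needs the same treatment; that repetitive care is exactly what a library like BigMPI is designed to take off your hands.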

Networks for MPI

It seems like we’ve gotten a rash of “how do I set up my new cluster for MPI?” questions on the Open MPI mailing list recently. I take this as a Very Good Thing, actually: it means more and more people are tinkering with and discovering the power of parallel computing, HPC, and MPI.