Cisco Blogs

Jeff Squyres

The MPI Guy

UCS Platform Software

Dr. Jeff Squyres is Cisco's representative to the MPI Forum standards body and is Cisco's core software developer in the open source Open MPI project. He has worked in the High Performance Computing (HPC) field since his early graduate-student days in the mid-1990s, and is a chapter author of the MPI-2 and MPI-3 standards.

Jeff received both a BS in Computer Engineering and a BA in English Literature from the University of Notre Dame in 1994; he received an MS in Computer Science and Engineering from Notre Dame two years later in 1996. After some active-duty tours in the military, Jeff received his Ph.D. in Computer Science and Engineering from Notre Dame in 2004. Jeff then worked as a post-doctoral research associate at Indiana University until he joined Cisco in 2006.

At Cisco, Jeff is part of the VIC group (Virtual Interface Card, Cisco's virtualized server NIC) in the larger UCS server group. He designs and writes systems-level software for optimized network IO in HPC and other high-performance applications. Jeff also represents Cisco to several open source software communities and the MPI Forum standards body.

Articles

Overlap of communication and computation (part 2)

2 min read

In part 1 of this series, I discussed various peer-wise technologies and techniques that MPI implementations typically use for communication / computation overlap. MPI-3.0, published in 2012, forced a change in the overlap game. Specifically: most prior overlap work had been in the area of individual messages between a pair of peers. These were very […]
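
MPI-3.0 added nonblocking collective operations, which push overlap beyond pairwise messages. As a rough sketch (my illustration, not code from the post), overlapping one of these new collectives with computation looks like this:

/* Hypothetical sketch: overlapping an MPI-3.0 nonblocking collective
 * (MPI_Iallreduce) with unrelated local computation. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    double local = 1.0, global = 0.0;
    MPI_Request req;

    /* Start the reduction, but do not block waiting for it. */
    MPI_Iallreduce(&local, &global, 1, MPI_DOUBLE, MPI_SUM,
                   MPI_COMM_WORLD, &req);

    /* ...do computation that does not need "global" here... */

    /* Complete the collective before using the result. */
    MPI_Wait(&req, MPI_STATUS_IGNORE);
    printf("sum across ranks = %f\n", global);

    MPI_Finalize();
    return 0;
}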

Overlap of communication and computation (part 1)

3 min read

I’ve mentioned computation / communication overlap before (e.g., here, here, and here). Various types of networks and NICs have long since had some form of overlap. Some had better quality overlap than others, from an HPC perspective. But with MPI-3, we’re really entering a new realm of overlap. In this first of two blog entries, I’ll […]
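
For reference, the classic pairwise overlap pattern that predates MPI-3 looks roughly like the following sketch (my illustration, not code from the post): post nonblocking operations, compute, then complete the requests.

/* Illustrative sketch: point-to-point communication/computation overlap. */
#include <mpi.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    double sendbuf[1024] = { 0 }, recvbuf[1024];
    int right = (rank + 1) % size;
    int left  = (rank + size - 1) % size;
    MPI_Request reqs[2];

    /* Post the communication first... */
    MPI_Irecv(recvbuf, 1024, MPI_DOUBLE, left,  0, MPI_COMM_WORLD, &reqs[0]);
    MPI_Isend(sendbuf, 1024, MPI_DOUBLE, right, 0, MPI_COMM_WORLD, &reqs[1]);

    /* ...do computation that does not touch the buffers in flight... */

    /* ...and complete the requests before using recvbuf. */
    MPI_Waitall(2, reqs, MPI_STATUSES_IGNORE);

    MPI_Finalize();
    return 0;
}

Whether the network actually makes progress on those requests while the application computes, and how much CPU that progress costs, is exactly the quality-of-overlap question raised above.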

HPC over UDP

2 min read

A few months ago, I posted an entry entitled “HPC in L3”. My only point for that entry was to remove the “HPC in L3? That’s a terrible idea!” knee-jerk reaction that we old-timer HPC types have. I mention this because we released a free software update a few days ago for the Cisco usNIC […]

Unsung heroes: MPI run time environments

3 min read

Most people immediately think of short message latency, or perhaps large message bandwidth when thinking about MPI. But have you ever thought about what your MPI implementation has to do before your application even calls MPI_INIT? Hint: it’s pretty crazy complex, from an engineering perspective. Think of it this way: operating systems natively provide a […]
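
For context, here is the application's entire share of that work, in a minimal sketch (mine, not the post's). Everything else, such as launching the processes across nodes, wiring them into MPI_COMM_WORLD, and assigning ranks, is handled by the run time environment before or inside MPI_Init.

#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    /* By the time execution reaches this line, a launcher such as
     * mpirun has already started N copies of this program (possibly
     * on many nodes) and connected them together. */
    printf("Hello from rank %d of %d\n", rank, size);

    MPI_Finalize();
    return 0;
}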

Traffic in parallel

3 min read

In my last entry, I gave a vehicles-driving-in-a-city analogy for network traffic. Let’s tie that analogy back to HPC and MPI.

Still more traffic

1 min read

I periodically write about network traffic, and how general / datacenter network traffic analysis is related to MPI / HPC. In my last entry, I mentioned how network traffic has many characteristics in common with distributed computing. Routing decisions, for example, are made independently at each network switch. Consider if you were looking down at […]

Traffic (redux)

2 min read

I’ve written about network traffic before (see this post and this post). It’s the subject of endless blog posts, help forums, and instructional guides across the internet. In a High Performance Computing (HPC) context, there are some fascinating aspects about network traffic that are fairly different than other types of network traffic.

BigMPI: You can haz moar counts!

2 min read

Jeff Hammond has recently started developing the BigMPI library. BigMPI is intended to handle all the drudgery of sending and receiving large messages in MPI. In Jeff’s own words: [BigMPI is an] Interface to MPI for large messages, i.e. those where the count argument exceeds INT_MAX but is still less than SIZE_MAX. BigMPI is designed […]
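
To illustrate the kind of drudgery BigMPI automates, here is one common workaround (my sketch, not BigMPI's API): because MPI send counts are C ints, moving more than INT_MAX elements means wrapping fixed-size chunks in a derived datatype and/or splitting the transfer.

#include <mpi.h>
#include <stddef.h>

/* Send "count" MPI_CHAR elements, where count may exceed INT_MAX.
 * The receiver must post two matching receives (hence the drudgery). */
int send_big(const void *buf, size_t count, int dest, int tag, MPI_Comm comm)
{
    const size_t chunk = 1 << 30;            /* 2^30 chars per chunk */
    size_t nchunks   = count / chunk;
    size_t remainder = count % chunk;

    MPI_Datatype chunk_type;
    MPI_Type_contiguous((int) chunk, MPI_CHAR, &chunk_type);
    MPI_Type_commit(&chunk_type);

    /* For realistic message sizes, nchunks and remainder fit in an int;
     * a robust implementation would check rather than assume. */
    MPI_Send(buf, (int) nchunks, chunk_type, dest, tag, comm);
    MPI_Send((const char *) buf + nchunks * chunk, (int) remainder,
             MPI_CHAR, dest, tag, comm);

    MPI_Type_free(&chunk_type);
    return MPI_SUCCESS;
}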

First public tools for the MPI_T interface in MPI-3.0

2 min read

Today’s guest post is written by Tanzima Islam, Postdoctoral Researcher at Lawrence Livermore National Laboratory, and Kathryn Mohror and Martin Schulz, Computer Scientists at Lawrence Livermore National Laboratory. The latest version of the MPI Standard, MPI 3.0, includes a new interface for tools: the MPI Tools Information Interface, or “MPI_T”. MPI_T complements the existing MPI profiling […]
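
As a small, hedged taste of what MPI_T offers (my sketch, not code from the guest post), a tool can enumerate the control variables an MPI implementation exposes, even without calling MPI_Init:

#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int provided, num_cvars;

    /* MPI_T has its own initialization, independent of MPI_Init. */
    MPI_T_init_thread(MPI_THREAD_SINGLE, &provided);
    MPI_T_cvar_get_num(&num_cvars);

    for (int i = 0; i < num_cvars; ++i) {
        char name[256], desc[1024];
        int name_len = sizeof(name), desc_len = sizeof(desc);
        int verbosity, binding, scope;
        MPI_Datatype dtype;
        MPI_T_enum enumtype;

        /* Names, descriptions, and scopes are implementation-specific. */
        MPI_T_cvar_get_info(i, name, &name_len, &verbosity, &dtype,
                            &enumtype, desc, &desc_len, &binding, &scope);
        printf("cvar %d: %s\n", i, name);
    }

    MPI_T_finalize();
    return 0;
}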