Jeff Squyres

The MPI Guy

UCS Platform Software

Dr. Jeff Squyres is Cisco's representative to the MPI Forum standards body and is Cisco's core software developer in the open source Open MPI project. He has worked in the High Performance Computing (HPC) field since his early graduate-student days in the mid-1990s, and is a chapter author of the MPI-2 and MPI-3 standards.

Jeff received both a BS in Computer Engineering and a BA in English Literature from the University of Notre Dame in 1994; he received an MS in Computer Science and Engineering from Notre Dame two years later, in 1996. After some active duty tours in the military, Jeff received his Ph.D. in Computer Science and Engineering from Notre Dame in 2004. Jeff then worked as a postdoctoral research associate at Indiana University until he joined Cisco in 2006.

At Cisco, Jeff is part of the VIC group (Virtual Interface Card, Cisco's virtualized server NIC) within the larger UCS server group. He designs and writes systems-level software for optimized network I/O in HPC and other high-performance applications. Jeff also represents Cisco in several open source software communities and at the MPI Forum standards body.

Articles

The History and Development of the MPI standard

1 min read

Today’s guest posting comes from Jesper Larsson Träff; he’s with the Research Group for Parallel Computing at the Institute of Information Systems, Faculty of Informatics, Vienna University of Technology (TU Wien). Have you ever wondered why MPI is designed the way that it is? The slides below are from Jesper’s talk about the History and Development of […]

MPI Quiz

1 min read

A fun scenario was proposed in the MPI Forum today. What do you think this code will do?

MPI_Comm comm, save;
MPI_Request req;
MPI_Init(NULL, NULL);
MPI_Comm_dup(MPI_COMM_WORLD, &comm);
MPI_Comm_rank(comm, &rank);
save = comm;
MPI_Isend(smsg, 4194304, MPI_CHAR, rank, 123, comm, &req);
MPI_Comm_free(&comm);
MPI_Recv(rmsg, 4194304, MPI_CHAR, rank, 123, save, MPI_STATUS_IGNORE);
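
For context, here is a minimal, compilable sketch of that scenario. The rank and buffer declarations, the message-size constant, and the trailing MPI_Wait/MPI_Finalize are my additions for completeness; the quiz itself stops at the MPI_Recv.

#include <stdlib.h>
#include <mpi.h>

#define MSG_SIZE 4194304  /* message size used in the quiz */

int main(void)
{
    MPI_Comm comm, save;
    MPI_Request req;
    int rank;
    char *smsg = malloc(MSG_SIZE);  /* added: not shown in the excerpt */
    char *rmsg = malloc(MSG_SIZE);  /* added: not shown in the excerpt */

    MPI_Init(NULL, NULL);
    MPI_Comm_dup(MPI_COMM_WORLD, &comm);
    MPI_Comm_rank(comm, &rank);
    save = comm;

    /* Start a self-send on the duplicated communicator... */
    MPI_Isend(smsg, MSG_SIZE, MPI_CHAR, rank, 123, comm, &req);
    /* ...free the communicator while that send is still pending... */
    MPI_Comm_free(&comm);
    /* ...then receive through the saved handle.  What happens? */
    MPI_Recv(rmsg, MSG_SIZE, MPI_CHAR, rank, 123, save, MPI_STATUS_IGNORE);

    /* Cleanup added for completeness; not part of the original quiz. */
    MPI_Wait(&req, MPI_STATUS_IGNORE);
    MPI_Finalize();
    free(smsg);
    free(rmsg);
    return 0;
}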

May 27, 2013

OPEN AT CISCO

Cisco’s Philosophy on Open Source

1 min read

Last weekend, I was fortunate enough to attend the Midwest Open Source Software Conference (MOSSCon 2013). I met some fascinating people, listened to some great talks, and learned a bunch of new things. All in all, a win. I also presented a talk on two things: the general open source philosophy at […]

Speaking about Open MPI / FOSS at Midwest Open Source Convention this weekend

1 min read

I’ve been a bit remiss about posting recently; it’s conference-paper-writing season, folks — sorry. But I thought I’d mention that I’ll be speaking at the Midwest Open Source Software Convention (MOSSCon) this weekend. I’ll be talking about my work in Open MPI, Hardware Locality (hwloc), and other open source projects, as well as Cisco’s role […]

New Addition to the Cisco MPI Team

1 min read

I’m very pleased to welcome a new member to the Cisco USNIC/MPI Team: Dave Goodell. Welcome, Dave! (Today was his first day.) Dave joins us from the MPICH team in the Mathematics and Computer Science Division at Argonne National Laboratory.

April 10, 2013

OPEN AT CISCO

Presenting Open MPI, USNIC, and Cisco open source at MOSSCon’13

1 min read

I was just recently informed that my talk was accepted at the Midwest Open Source Software Conference (MOSSCon).  w00t! MOSSCon will be held at the University of Louisville, in Louisville, Kentucky, USA, on May 18-19, 2013.  It’s being organized by people from the Kentucky Open Source Society (KYOSS) and other open source / maker-oriented groups […]

Latency Analogies (part 2)

2 min read

In a prior blog post, I talked about latency analogies.  I compared levels of latencies to your home, your neighborhood, a far-away neighborhood, and another city.  I talked about these localities in terms of communication. Let’s extend that analogy to talk about data locality.

Latency Analogies

1 min read

Multiple readers have told me that it is difficult for them to understand and/or visualize the effects of latency on their HPC applications, particularly in modern NUMA (non-uniform memory access) and NUNA (non-uniform network access) environments. Let’s break down the different levels of latency in typical modern server and network computing environments.

Social media login no longer required for comments

1 min read

A number of you complained when blogs.cisco.com switched to requiring a social media login to leave comments. It turns out that you were not alone. Industry-wide, it seems that many people do not want to associate their personal Facebook/Twitter/etc. logins with work-related social media (i.e., this effect was seen at more than just Cisco). The […]

EuroMPI 2013: papers due soon!

1 min read

Consider this a public service announcement: don’t forget that EuroMPI 2013 papers are due soon! EuroMPI is the place to see where the documented standard of MPI hits reality, both in terms of implementations and applications. Come talk to real implementors, real users, and hear about state-of-the-art techniques and performance optimizations.

MPI for mobile devices (or not)

2 min read

Every once in a while, the idea pops up again: why not use all the world’s cell phones for parallel and/or distributed computations? There’s gazillions of these phones — think of the computing power! After all, an army of ants can defeat a war horse, right? Well… yes… and no.

MPI Forum: What’s Next?

Now that we’re just starting into the MPI-3.0 era, what’s next? The MPI Forum is still having active meetings.  What is left to do?  Isn’t MPI “done”? Nope.  MPI is an ever-changing standard to meet the needs of HPC.  And since HPC keeps changing, so does MPI.

Ain’t your father’s TCP

TCP?  Who cares about TCP in HPC? More and more people, actually.  With the commoditization of HPC, lots of newbie HPC users are intimidated by special, one-off, traditional HPC types of networks and opt for the simplicity and universality of Ethernet. And it turns out that TCP doesn’t suck nearly as much as most (HPC) […]

Modern GPU Integration in MPI

Today’s guest post is from Rolf vandeVaart, a Senior CUDA Engineer with NVIDIA. GPUs are becoming quite popular as accelerators in High Performance Computing clusters. For example, check out Titan, a recent entry into the Top 500 list from Oak Ridge National Laboratory. Titan has 18,688 nodes (299,008 CPU cores) coupled with 18,688 NVIDIA Tesla K20 […]

Process and memory affinity: why do you care?

3 min read

I’ve written about NUMA effects and process affinity on this blog lots of times in the past. It’s a complex topic that has a lot of real-world effects on your MPI and HPC applications. If you’re not using processor and memory affinity, you’re likely experiencing performance degradation without even realizing it. In short: If you’re not booting […]

Message size: big or small?

3 min read

It’s the eternal question: should I send lots and lots of small messages, or should I lump multiple small messages into a single, bigger message? Unfortunately, the answer is: it depends. There are a lot of factors in play.

I CAN HAS MPI

2 min read

The Cisco and Microsoft joint Cross-Animal Technology Project, a well-established player in the field of multi-species collaborative initiatives, is pleased to introduce its next project: a revolution in High Performance Computing (HPC): LOLCODE language bindings for the Message Passing Interface (MPI). CATP believes that cats are natural predatory programmers.  Who better to take advantage of all […]

MPI and Java: redux

1 min read

In a prior blog entry, I discussed how we are resurrecting a Java interface for MPI in the upcoming v1.7 release of Open MPI. Some users have already experimented with this interface and found it lacking in at least two ways: creating datatypes of multi-dimensional arrays doesn’t work because of how Java handles them internally […]

MPI_REQUEST_FREE is Evil

2 min read

It was pointed out to me that in my last blog post (Don’t leak MPI_Requests), I failed to mention the MPI_REQUEST_FREE function. True enough — I did fail to mention it.  But I did so on purpose, because MPI_REQUEST_FREE is evil. Let me explain…

Don’t leak MPI_Requests

1 min read

With the Mayan apocalypse safely behind us, we can now discuss MPI again. An MPI application developer came to me the other day with a potential bug in Open MPI: he noticed that Open MPI was consuming vast amounts of memory, such that trying to allocate memory from his application failed. Ouch! It turns out, […]
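
As a general illustration of the pattern the title warns about (a hedged sketch, not necessarily the exact bug from the post, which is cut off above): every nonblocking call hands back an MPI_Request that must eventually be completed, e.g. with MPI_Wait or MPI_Waitall. Requests that are never completed keep their internal state alive inside the MPI library, so memory use grows on every call. The helper name and sizes below are hypothetical.

#include <stddef.h>
#include <mpi.h>

#define NREQ  1000   /* hypothetical values, purely for illustration */
#define CHUNK 4096

/* Hypothetical helper: receive NREQ chunks of CHUNK bytes from 'peer'
   into 'buf' using nonblocking receives. */
void recv_chunks(char *buf, int peer, MPI_Comm comm)
{
    MPI_Request reqs[NREQ];

    for (int i = 0; i < NREQ; ++i) {
        /* Each MPI_Irecv allocates internal state tied to reqs[i]. */
        MPI_Irecv(buf + (size_t)i * CHUNK, CHUNK, MPI_CHAR,
                  peer, i, comm, &reqs[i]);
    }

    /* Completing the requests releases that state.  Skip this step and
       the handles (and their resources) pile up; that is the leak. */
    MPI_Waitall(NREQ, reqs, MPI_STATUSES_IGNORE);
}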

McMPI

3 min read

Today’s guest blog entry comes from Daniel Holmes, an Applications Developer at EPCC. I met Jeff at EuroMPI in September, and he has invited me to write a few words on my experience of developing an MPI library. My PhD involved building a message passing library using C#, not accessing an existing MPI library […]

EuroMPI 2013: CFP

1 min read

It’s that time of year again — time to start preparing for EuroMPI 2013! Next year, we’ll be heading to Madrid, Spain, September 15-18. Here’s a snippet from the call for papers: Topics of interest include, but are not limited to: MPI implementation issues and improvements; extensions to and shortcomings of MPI; tools […]

Cisco ultra low latency support for MPI

1 min read

My team demonstrated our new ultra-low latency Ethernet solution in the Cisco booth at SC this past week (it was so busy that I didn’t get to post this until it was all over!). The short version is that we have implemented operating system bypass and NIC hardware offload via the Linux OpenFabrics verbs API […]

MPICH 3.0 RC released

1 min read

The MPICH folks have released a release candidate for MPICH 3.0: A new preview release of MPICH, 3.0rc1, is now available for download. The primary focus of this release is to provide full support for the MPI-3 standard. Other smaller features, including support for ARM v7 native atomics, are also included.

MPI-3 standard available in hardcover

1 min read

The MPI-3.0 standard is now available in hardcover (it’s green!). The book is available at cost from Dr. Rolf Rabenseifner at HLRS; no profit is being made from these sales. Here’s an excerpt from Rolf’s original announcement: As a service (at costs) for users of the Message Passing Interface, HLRS has printed the new Standard, […]

Cisco @SC2012

1 min read

Going to Salt Lake City for Supercomputing 2012 next week?  So are we! Be sure to drop by and see us in the Cisco booth (#2517).  I’ll be there, demonstrating and talking about our latest developments in ultra low latency Ethernet (hint: it includes 250ns port-to-port Ethernet switch latency and our latest MPI/OS-bypass technology on the […]