HPC

HPC schedulers: What is a “slot”?

3 min read

Today’s guest post comes from Ralph Castain, a principal engineer at Intel.  The bulk of this post is an email he sent explaining the concept of a “slot” in typical HPC schedulers. This is a bit of a departure from the normal fare on this blog, but it is still a critical concept to understand for running HPC […]
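
To make the idea concrete: in Open MPI, for example, a hostfile can declare how many slots each node offers, and mpirun fills those slots with processes. A minimal sketch (the hostnames and application name are hypothetical; the slots= syntax is Open MPI’s hostfile convention):

    # myhosts: each line names a node and the number of slots it offers
    node01 slots=4
    node02 slots=4

Launching “mpirun --hostfile myhosts -np 8 ./my_mpi_app” then fills all 8 slots: 4 processes per node.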

usNIC provider contributed to libfabric

1 min read

Today’s guest post is by Reese Faucette, one of my fellow usNIC team members here at Cisco. I’m pleased to announce that this past Friday, Cisco contributed a usNIC-based provider to libfabric, the new API in the works from the OpenFabrics Interfaces Working Group. (Editor’s note: I’ve blogged about libfabric before.) Yes, the road is littered with […]
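
For the curious, here is a minimal C sketch (error handling mostly elided) of how an application can ask libfabric which providers are available via fi_getinfo; once the usNIC provider is installed, it shows up in this list:

    #include <stdio.h>
    #include <rdma/fabric.h>

    int main(void)
    {
        struct fi_info *info, *cur;

        /* Ask libfabric to enumerate every available provider/endpoint */
        if (fi_getinfo(FI_VERSION(1, 0), NULL, NULL, 0, NULL, &info) != 0) {
            fprintf(stderr, "fi_getinfo failed\n");
            return 1;
        }

        /* Walk the returned list and print each provider's name */
        for (cur = info; cur != NULL; cur = cur->next)
            printf("provider: %s\n", cur->fabric_attr->prov_name);

        fi_freeinfo(info);
        return 0;
    }

Compile and link with -lfabric.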

MPI-3.1

1 min read

As you probably already know, the MPI-3.0 document was published in September of 2012. We even got a new logo for MPI-3.  Woo hoo! The MPI Forum has been busy working on both errata to MPI-3.0 (which will be collated and published as “MPI-3.1”) and all-new functionality for MPI-4.0. The current plan is to finalize […]
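
If you want to see which version of the standard your MPI library reports, the standard itself provides MPI_GET_VERSION; a minimal C sketch:

    #include <stdio.h>
    #include <mpi.h>

    int main(int argc, char *argv[])
    {
        int version, subversion;

        MPI_Init(&argc, &argv);

        /* Reports, e.g., 3 and 0 for MPI-3.0, or 3 and 1 for MPI-3.1 */
        MPI_Get_version(&version, &subversion);
        printf("This library supports MPI-%d.%d\n", version, subversion);

        MPI_Finalize();
        return 0;
    }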

Overlap of communication and computation (part 1)

3 min read

I’ve mentioned computation / communication overlap before (e.g., here, here, and here). Various types of networks and NICs have long since had some form of overlap.  Some had better-quality overlap than others, from an HPC perspective. But with MPI-3, we’re really entering a new realm of overlap.  In this first of two blog entries, I’ll […]
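
To make the pattern concrete before diving in: the classic recipe is to start nonblocking operations, compute on data that does not depend on them, and only then wait for completion. A minimal C sketch (run with an even number of ranks; buffer sizes are arbitrary):

    #include <stdio.h>
    #include <mpi.h>

    #define N 1024

    int main(int argc, char *argv[])
    {
        int rank, peer, i;
        double sendbuf[N], recvbuf[N], sum = 0.0;
        MPI_Request reqs[2];

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        peer = rank ^ 1;  /* pair up ranks: 0<->1, 2<->3, ... */

        for (i = 0; i < N; i++)
            sendbuf[i] = (double) rank;

        /* Start the communication, but do not wait for it yet */
        MPI_Irecv(recvbuf, N, MPI_DOUBLE, peer, 0, MPI_COMM_WORLD, &reqs[0]);
        MPI_Isend(sendbuf, N, MPI_DOUBLE, peer, 0, MPI_COMM_WORLD, &reqs[1]);

        /* Computation that does not touch recvbuf can (ideally) overlap
           with the transfers happening in the background */
        for (i = 0; i < N; i++)
            sum += sendbuf[i] * sendbuf[i];

        /* Only now block until both transfers complete */
        MPI_Waitall(2, reqs, MPI_STATUSES_IGNORE);

        printf("rank %d: sum=%f, got %f from peer\n", rank, sum, recvbuf[0]);
        MPI_Finalize();
        return 0;
    }

Whether true overlap actually happens depends on the MPI implementation and the NIC, which is exactly what this series is about.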

HPC over UDP

2 min read

A few months ago, I posted an entry entitled “HPC in L3”.  My only point for that entry was to remove the “HPC in L3? That’s a terrible idea!” knee-jerk reaction that we old-timer HPC types have. I mention this because we released a free software update a few days ago for the Cisco usNIC […]

Unsung heroes: MPI run time environments

3 min read

Most people immediately think of short message latency, or perhaps large message bandwidth, when thinking about MPI. But have you ever thought about what your MPI implementation has to do before your application even calls MPI_INIT? Hint: it’s pretty crazy complex, from an engineering perspective. Think of it this way: operating systems natively provide a […]
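
To appreciate how much gets hidden, consider how little a trivial MPI program does itself; a minimal C sketch, with comments noting what the runtime has already done behind the scenes:

    #include <stdio.h>
    #include <mpi.h>

    int main(int argc, char *argv[])
    {
        int rank, size;

        /* Before this call even runs, the runtime has launched this
           process on some node with the right environment; by the time
           it returns, the process knows its rank and how to reach (or
           lazily connect to) every one of its peers. */
        MPI_Init(&argc, &argv);

        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);
        printf("Hello from rank %d of %d\n", rank, size);

        MPI_Finalize();
        return 0;
    }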

Traffic in parallel

3 min read

In my last entry, I gave a vehicles-driving-in-a-city analogy for network traffic. Let’s tie that analogy back to HPC and MPI.

Still more traffic

1 min read

I periodically write about network traffic, and how general / datacenter network traffic analysis is related to MPI / HPC. In my last entry, I mentioned how network traffic has many characteristics in common with distributed computing. Routing decisions, for example, are made independently at each network switch. Consider if you were looking down at […]

Traffic (redux)

2 min read

I’ve written about network traffic before (see this post and this post). It’s the subject of endless blog posts, help forums, and instructional guides across the internet. In a High Performance Computing (HPC) context, there are some fascinating aspects of network traffic that are fairly different from other types of network traffic.