
HPC schedulers: What is a “slot”?

October 22, 2014 at 9:48 am PST

Today’s guest post comes from Ralph Castain, a principal engineer at Intel.  The bulk of this post is an email he sent explaining the concept of a “slot” in typical HPC schedulers.  This is a bit of a departure from the normal fare on this blog, but slots are a critical concept to understand for running HPC applications efficiently.  With his permission, I re-publish Ralph’s email here because it’s a great analogy to explain the “slot” concept, which is broadly applicable to HPC users.

The question of “what is a [scheduler] slot” came up yesterday and was an obvious source of confusion, so let me try to explain the concept using a simple model.

Suppose I own a fleet of cars at several locations around the country. I am in the business of providing rides for people. Each car has 5 seats in it.

In one location, my clientele doesn’t have much sense of personal space and is willing to be a little crowded. In that location, I sell tickets to share a car, and allow up to 10 people who are going in roughly the same direction to share a single vehicle (hey, they are willing to sit on each other’s lap!).

In another location, my clients aren’t quite as “friendly” and really prefer to have their own seat. However, they are willing to share the car with others headed in the same direction, so I sell only as many tickets as I have seats -- in this case, up to 5 tickets for a given car.

In a third location, my clients tend to be a little on the large side -- when I have a large passenger, I find that everyone is happier if I don’t fill the backseat. So when a customer flags that they are a little larger than average, I only sell 4 tickets for that car -- i.e., I require that the middle seat in the back be empty so the passengers can spread out a bit. This may require that I schedule that larger client on a different (perhaps larger) car if I already have 4 people for one that would otherwise be available.

In all of the above locations, I will sell another ticket and allow a new passenger to enter a car once someone is dropped off. So I try to keep my cars as full as possible by constantly adding a replacement customer when one leaves. However, I never allow more passengers in the car than that location will tolerate -- if someone tries to give a “free” lift to a person at the side of the road, I block them from doing so. In addition, if someone calls and asks for 8 tickets, I will schedule them across multiple cars according to the local policy.

In yet another location, I have very exclusive customers -- they don’t want anyone in the car with them. In this case, I simply lease them the entire car for the requested duration. They are free to do whatever they want with the vehicle (including picking up as many passengers as they like), so long as they return it clean and in good working condition.

The concept of the “slot” in schedulers is based on that max payload I define for each location. The scheduler is selling “tickets” to the servers/nodes based on some limit set by the system admin, which is usually based on the needs and policies of the local installation. As the above illustration shows, the definition of the “max payload” for a node can vary by site and node, and the scheduler takes into account a variety of requirements when allocating slots to a user.
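
To make that concrete, here is a minimal sketch of how a site admin might express this “max payload” to one particular scheduler, using an Open MPI-style hostfile.  The node names and slot counts are invented for illustration; other resource managers express the same limit through their own configuration:

    # Hypothetical hostfile: slots=N is the admin-defined "max payload"
    node01 slots=5
    node02 slots=5
    # The "large passenger" node: one seat held back
    node03 slots=4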

Once we have an allocation, we then have to assign seats to individual customers. This is the “mapping” policy. When I map processes “by slot,” what I mean is that I start with the first seat in the first car, and assign customers to seats in a sequential fashion, filling all the allocated seats in the first car before starting to fill the second one. This is best for a “chatty” group of customers, but can lead to one car being more heavily loaded than the others.

When I map processes “by node,” I assign the first customer to the first seat in the first car, the next customer to the first allocated seat in the second car, and so on, continuing round-robin until all the customers have been assigned an allocated seat. This balances the load across the cars, but is very inefficient if the customers need to have a conversation.
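
Here is a simplified sketch of those two policies in plain Python -- not any scheduler’s actual implementation, with car names and seat counts invented to match the analogy:

    # Simplified sketch of the two mapping policies described above.
    # Assumes the total seat count is at least nprocs.
    slots = {"car1": 5, "car2": 5, "car3": 4}  # admin-defined seats per car

    def map_by_slot(nprocs, slots):
        """Fill all of one car's seats before moving to the next car."""
        placement = []
        for car, nseats in slots.items():
            placement += [car] * min(nseats, nprocs - len(placement))
        return placement

    def map_by_node(nprocs, slots):
        """Round-robin one customer per car per pass, honoring seat limits."""
        placement = []
        used = {car: 0 for car in slots}
        while len(placement) < nprocs:
            for car, nseats in slots.items():
                if used[car] < nseats and len(placement) < nprocs:
                    placement.append(car)
                    used[car] += 1
        return placement

    print(map_by_slot(8, slots))  # car1 x5, then car2 x3
    print(map_by_node(8, slots))  # car1, car2, car3, car1, car2, car3, car1, car2

(In Open MPI 1.8-era terms, these correspond roughly to mpirun’s --map-by slot and --map-by node options.)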

Obviously, there are lots and lots of ways of allocating and assigning seats within those cars… that’s a whole separate topic.  :-)


usNIC provider contributed to libfabric

October 20, 2014 at 5:54 am PST

Today’s guest post is by Reese Faucette, one of my fellow usNIC team members here at Cisco.

I’m pleased to announce that this past Friday, Cisco contributed a usNIC-based provider to libfabric, the new API in the works from the OpenFabrics Interfaces Working Group.

(Editor’s note: I’ve blogged about libfabric before.)

Yes, the road is littered with the bodies of APIs that were great ideas at the time (or not), but that doesn’t change the fact that neither Berkeley sockets nor Linux Verbs is really adequate as a cross-vendor, high-performance programming API.



MPI-3.1

October 7, 2014 at 6:18 am PST

As you probably already know, the MPI-3.0 document was published in September of 2012.

We even got a new logo for MPI-3.  Woo hoo!

The MPI Forum has been busy working on both errata to MPI-3.0 (which will be collated and published as “MPI-3.1”) and all-new functionality for MPI-4.0.

The current plan is to finalize all errata and outstanding issues for MPI-3.1 in our December 2014 meeting (i.e., in the post-Supercomputing lull).  This means that we can vote on the final MPI-3.1 document at the next MPI Forum meeting in March 2015.

MPI is sometimes criticized for being “slow” in development.  Why on earth would it take 2 years to formalize errata from the MPI-3.0 document into an MPI-3.1 document?

The answer is (at least) twofold:

  1. This stuff is really, really complicated.  What appears to be a trivial issue almost always turns out to have deeper implications that really need to be understood before proceeding.  This kind of deliberate thought and process simply takes time.
  2. MPI is a standard.  Publishing a new version of that standard has a very large impact; it decides the course of many vendors, researchers, and users.  Care must be taken to get that publication as correct as possible.  Perfection is unlikely — as scientists and engineers, we absolutely have to admit that — but we want to be as close to fully-correct as possible.

MPI-4 is still “in the works”.  Big New Things, such as endpoints and fault-tolerant behavior, are still under active development.  MPI-4 is still a ways off, so it’s a bit early to start making predictions about what will/will not be included.


Overlap of communication and computation (part 1)

September 16, 2014 at 5:00 am PST

I’ve mentioned computation / communication overlap before (e.g., here, here, and here).

Various types of networks and NICs have long since had some form of overlap.  Some have better-quality overlap than others, from an HPC perspective.

But with MPI-3, we’re really entering a new realm of overlap.  In this first of two blog entries, I’ll explain some of the various flavors of overlap and how they are beneficial to MPI/HPC-style applications.



HPC over UDP

September 12, 2014 at 9:52 am PST

A few months ago, I posted an entry entitled “HPC in L3”.  My only point for that entry was to remove the “HPC in L3? That’s a terrible idea!” knee-jerk reaction that we old-timer HPC types have.

I mention this because we released a free software update a few days ago for the Cisco usNIC product that enables usNIC traffic to flow across UDP (vs. raw L2 frames).  Woo hoo!

That’s right, sports fans: another free software update to make usNIC even better than ever.  Especially across 40Gb interfaces!

