Welcome to 2012! I’m finally about caught up from the Christmas holidays, last week’s travel to the MPI Forum, etc. It’s time to finally get my blogging back on.
Let’s start with a short one…
Rich Brueckner from InsideHPC interviewed me right before the Christmas break about the low Ethernet MPI latency demo that I gave at SC’11. I blogged about this stuff before, but in the slidecast that Rich posted, I provide a bit more detail about how this technology works.
Remember that this is Cisco’s 1st generation virtualized NIC; our 2nd generation is coming “soon,” and will have significantly lower MPI latency (I hate being fuzzy and not quoting the exact numbers, but the product is not yet released, so I can’t comment on it yet. I’ll post the numbers when the product is actually available).
Tags: HPC, Linux, mpi, VFIO
Linux VFIO (Virtual Function I/O) is an emerging technology that allows direct access to PCI devices from userspace. Although primarily designed as a hypervisor-bypass technology for virtualization use cases, it can also be used in an HPC context.
Think of it this way: hypervisor bypass is somewhat similar to operating system (OS) bypass. And OS bypass is a characteristic sought in many HPC low-latency networks these days.
Drop by the Cisco SC’11 booth (#1317), where we’ll be showing a technology preview demo of Open MPI utilizing Linux VFIO over the Cisco “Palo” family of first-generation hardware virtualized NICs (specifically, the P81E PCI form factor). VFIO + hardware virtualized NICs enable benefits such as:
- Low half-round-trip (HRT) ping-pong latencies over Ethernet (4.88us) via direct access to L2 from userspace
- Hardware steering of inbound and outbound traffic to individual MPI processes
Let’s dive into these technologies a bit and explain how they benefit MPI.
Tags: HPC, Linux, sc11, VFIO