From @softtalkblog, I was recently directed to an article about the Multicore Communication API (MCAPI) and MPI. Interesting stuff.
The main sentiments expressed in the article seem quite reasonable:
- MCAPI plays better in the embedded space than MPI (that’s what MCAPI was designed for, after all). Simply put: MPI is too feature-rich (read: big) for embedded environments, reflecting the different design goals of MCAPI vs. MPI.
- MCAPI + MPI might be a useful combination. The article cites a few examples of using MCAPI to wrap MPI messages. Indeed, I agree that MCAPI seems like it may be a useful transport in some environments.
One thing that puzzled me about the article, however, is that it states that MPI is terrible at moving messages around within a single server.
Huh. That’s news to me…
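In fact, MPI implementations (Open MPI included) typically route messages between processes on the same server through shared memory rather than out over a network at all, which is exactly the case where MPI latencies are lowest. Here is a minimal sketch, not taken from the article, of two ranks on one node exchanging a message; the buffer contents and program name are purely illustrative:

```c
/* Minimal sketch: two MPI ranks exchanging a message on the same server.
 * With Open MPI, intra-node traffic like this normally travels over the
 * shared-memory transport, not the network. */
#include <mpi.h>
#include <stdio.h>
#include <string.h>

int main(int argc, char **argv)
{
    int rank;
    char buf[64];

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (0 == rank) {
        strcpy(buf, "hello from rank 0");
        MPI_Send(buf, sizeof(buf), MPI_CHAR, 1, 0, MPI_COMM_WORLD);
    } else if (1 == rank) {
        MPI_Recv(buf, sizeof(buf), MPI_CHAR, 0, 0, MPI_COMM_WORLD,
                 MPI_STATUS_IGNORE);
        printf("rank 1 received: %s\n", buf);
    }

    MPI_Finalize();
    return 0;
}
```

Run both ranks on a single host (e.g., `mpirun -np 2 --mca btl sm,self ./intranode` to explicitly select Open MPI's shared-memory "sm" BTL) and the message never touches a NIC.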
Read More »
Tags: HPC, MCAPI, mpi, Multicore Association
Let me tell you a reason why open source and open communities are great: information sharing.
Let me explain…
I am Cisco’s representative to the Open MPI project, a middleware implementation of the Message Passing Interface (MPI) standard that facilitates big number crunching and parallel programming. It’s a fairly large, complex code base: Ohloh says that there are over 674,000 lines of code. Open MPI is portable to a wide variety of platforms and network types.
However, supporting everything that MPI is supposed to support, and providing the same experience on every platform and network, can be quite challenging. For example, a user posted a problem to our mailing list the other day about a specific feature not working properly on OS X.
Read More »
Tags: HPC, mpi, MPICH2, Open MPI, open source
As usual, I’m exhausted — in a good way — at the end of an SC week. Whew!
Thanks to all who came to see my demo (showing 5.17us NetPIPE MPI latency over Ethernet via Linux VFIO and Cisco’s “Palo” NIC — no, that’s not iWARP and it’s not IBoE, a.k.a. RoCE — see my prior post for a little more info), and thanks to all who came to the Open MPI BOF. I counted about 100 people at the BOF. The BOF slides are available, if you missed the actual event.
Brock and I did a [probably incredibly embarrassing] short video spot with Rich Brueckner at the end of the show (another in the RCE-Cast <--> insideHPC crossover series). The convention announcer guy was literally saying “The show is over; please leave” over the PA while we were recording. Whenever Rich gets around to posting the video, I think you’ll see why I usually stick to writing. :-)
Read More »
Tags: HPC, sc11
Linux VFIO (Virtual Function IO) is an emerging technology that allows direct access to PCI devices from userspace. Although primarily designed as a hypervisor-bypass technology for virtualization uses, it can also be used in an HPC context.
Think of it this way: hypervisor bypass is somewhat similar to operating system (OS) bypass. And OS bypass is a characteristic sought in many HPC low-latency networks these days.
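For the curious, here is roughly what “direct access to a PCI device from userspace” looks like. One caveat: the sketch below follows the generic VFIO flow using the uAPI that eventually landed in mainline Linux (<linux/vfio.h>); the technology preview described in this post predates that interface, so treat it purely as an illustration. The IOMMU group number and PCI address are placeholders.

```c
/* Illustrative sketch of the generic VFIO flow (mainline <linux/vfio.h> uAPI).
 * The group number ("26") and PCI address ("0000:06:00.0") are placeholders. */
#include <fcntl.h>
#include <stdio.h>
#include <sys/ioctl.h>
#include <linux/vfio.h>

int main(void)
{
    /* 1. Open a VFIO container and sanity-check the API / IOMMU support. */
    int container = open("/dev/vfio/vfio", O_RDWR);
    if (container < 0 ||
        ioctl(container, VFIO_GET_API_VERSION) != VFIO_API_VERSION ||
        !ioctl(container, VFIO_CHECK_EXTENSION, VFIO_TYPE1_IOMMU))
        return 1;

    /* 2. Open the IOMMU group that the NIC belongs to and check viability. */
    int group = open("/dev/vfio/26", O_RDWR);
    struct vfio_group_status status = { .argsz = sizeof(status) };
    if (group < 0 ||
        ioctl(group, VFIO_GROUP_GET_STATUS, &status) < 0 ||
        !(status.flags & VFIO_GROUP_FLAGS_VIABLE))
        return 1;

    /* 3. Attach the group to the container and select the IOMMU model. */
    ioctl(group, VFIO_GROUP_SET_CONTAINER, &container);
    ioctl(container, VFIO_SET_IOMMU, VFIO_TYPE1_IOMMU);

    /* 4. Get a file descriptor for the device itself; its BARs can then be
     *    mmap()ed and driven directly from userspace, bypassing the kernel
     *    on the data path. */
    int device = ioctl(group, VFIO_GROUP_GET_DEVICE_FD, "0000:06:00.0");
    printf("device fd: %d\n", device);
    return 0;
}
```

From an MPI implementation’s point of view, that device fd is what lets the library post sends and receives straight to the NIC without a system call per message.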
Drop by the Cisco SC’11 booth (#1317) where we’ll be showing a technology preview demo of Open MPI utilizing Linux VFIO over the Cisco “Palo” family of first-generation hardware-virtualized NICs (specifically, the P81E PCI form factor). VFIO + hardware-virtualized NICs allow benefits such as:
- Low half-round-trip (HRT) ping-pong latencies over Ethernet via direct access to L2 from userspace (4.88us)
- Hardware steering of inbound and outbound traffic to individual MPI processes
Let’s dive into these technologies a bit and explain how they benefit MPI.
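Before the full write-up, a note on the first bullet above: a half-round-trip (HRT) ping-pong latency is conventionally measured by timing many round trips between two ranks and dividing by two. The sketch below is a generic version of that measurement (it is not the NetPIPE benchmark used in the demo); the message size and iteration counts are arbitrary.

```c
/* Generic HRT ping-pong sketch: rank 0 times many round trips to rank 1;
 * the reported latency is half the average round-trip time. */
#include <mpi.h>
#include <stdio.h>

#define WARMUP 100
#define ITERS  10000

int main(int argc, char **argv)
{
    int rank;
    char buf[1] = { 0 };   /* 1-byte payload, as in small-message latency tests */

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    double start = 0.0;
    for (int i = 0; i < WARMUP + ITERS; ++i) {
        if (WARMUP == i)
            start = MPI_Wtime();     /* start the clock after warmup */
        if (0 == rank) {
            MPI_Send(buf, 1, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
            MPI_Recv(buf, 1, MPI_CHAR, 1, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        } else if (1 == rank) {
            MPI_Recv(buf, 1, MPI_CHAR, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            MPI_Send(buf, 1, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
        }
    }
    double elapsed = MPI_Wtime() - start;

    if (0 == rank)
        printf("HRT latency: %.2f us\n", elapsed * 1e6 / (2.0 * ITERS));

    MPI_Finalize();
    return 0;
}
```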
Read More »
Tags: HPC, Linux, sc11, VFIO
I’m sure most everyone has heard already, but the K supercomputer has been upgraded and now reaches over 10 petaflops. Wow!
10.51 petaflops, actually, so if you round up, you can say that they “turned it up to 11.” Ahem.
We’ll actually have Shinji Sumimoto from the K team speaking during the Open MPI BOF at SC’11. Rolf vandeVaart from NVIDIA will also be discussing their role in Open MPI during the BOF.
We have the 12:15-1:15pm timeslot on Wednesday (room TCC 303); come join us to hear about the present status and future plans for Open MPI.
Tags: HPC, Open MPI, Supercomputing