Are you in the Northern California Bay Area and want to hear about Open MPI and/or Cisco’s usNIC technology next week? If so, you’re in luck! I’ll be speaking at Lawrence Berkeley Lab (LBL) next Thursday, November 7, 2013, at 2:30pm.
I’ve talked before about how getting high performance in MPI is all about offloading to dedicated hardware. You want to get software out of the way as soon as possible and let the underlying hardware progress the message passing at maximum speed.
In a previous post, I gave some (very) general requirements for how to set up / install MPI. This is post #2 in the series: now that you’ve got a shiny new computational cluster, and you’ve got one or more MPI
The slides below are from my presentation at EuroMPI’13 about Open MPI’s flexible process affinity interface (in OMPI 1.7.2 and later). I described this system in prior blog entries (one, two, three), but many people keep asking me
A few people asked me to post the slides that I just presented in the Cisco vendor session at EuroMPI’13. In short, I gave a brief overview of our servers and switches, and then some technical details of how we use SR-IOV in our usNIC, etc.
At this year’s 2013 High Performance Computing on Wall Street, the greatest minds from the financial services industry once again gathered to discuss the latest technology trends that give financial firms a technology edge in accessing information in
I’m excited to announce that Cisco has just released usNIC as a feature of the UCS C-Series Rack Servers product line. usNIC has been available since release 1.5(2) of the Cisco UCS C-Series Integrated Management Controller.
One interesting feature of the usNIC ultra-low latency Ethernet solution for the Cisco UCS VIC is that it is based on SR-IOV. What is SR-IOV, and why is it relevant in the HPC world? SR-IOV (Single Root I/O Virtualization) is
I often get questions from those who are just starting with MPI; they want to know common things such as: how to install / set up an MPI implementation, how to compile their MPI applications, how to run their MPI applications, and how to learn more about MPI
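As a minimal sketch of the compile-and-run steps mentioned above (assuming an Open MPI installation whose wrapper compilers are already on your PATH; `hello_mpi.c` and the `my_hosts` hostfile are hypothetical example names, not from the original post):

```shell
# Compile an MPI program with the Open MPI wrapper compiler,
# which adds the right include/library flags for the installed MPI.
mpicc hello_mpi.c -o hello_mpi

# Launch 4 copies of the program on the local host.
mpirun -np 4 ./hello_mpi

# Launch 16 processes across the nodes listed in a hostfile
# (one hostname per line).
mpirun -np 16 --hostfile my_hosts ./hello_mpi
```

The wrapper compiler (`mpicc`) is generally preferred over invoking the underlying compiler directly, since it keeps the MPI-specific flags out of your build scripts.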