I’ve talked before about how getting high performance in MPI is all about offloading to dedicated hardware. You want to get software out of the way as soon as possible and let the underlying hardware progress the message passing at max speed.
But here’s the funny thing about networking hardware: it tends to have limited resources. You might have incredibly awesome NICs in your HPC cluster, but they still have only a finite (and small) amount of resources: RAM, queues, queue depth, descriptors for queue entries, and so on.
Read More »
Tags: HPC, mpi
In a previous post, I gave some (very) general requirements for how to set up and install an MPI implementation.
This is post #2 in the series: now that you’ve got a shiny new computational cluster, and you’ve got one or more MPI implementations installed, I’ll talk about how to build, compile, and link applications that use MPI.
To be clear: MPI implementations are middleware — they do not do anything remarkable by themselves. MPI implementations are generally only useful when you have an application that uses the MPI middleware to do something interesting.
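As a quick, hedged sketch of what “using the middleware” looks like in practice: most MPI implementations ship wrapper compilers that add the MPI-specific compile and link flags for you. The exact command names vary by implementation (the file name `hello.c` below is just a placeholder); `--showme` is specific to Open MPI’s wrappers:

```shell
# Compile and link an MPI application with the implementation's wrapper
# compiler; the wrapper silently adds the right -I, -L, and -l flags.
mpicc hello.c -o hello

# Open MPI only: show the underlying compiler command line that the
# wrapper would invoke, without actually running it.
mpicc --showme hello.c -o hello

# Launch the resulting application across 4 processes.
mpirun -np 4 ./hello
```

Using the wrapper instead of hard-coding flags keeps your build working even when the MPI implementation’s install layout changes.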
Read More »
Tags: HPC, mpi, MPI newbie
The slides below are from my presentation at EuroMPI’13 about Open MPI’s flexible process affinity interface (in OMPI 1.7.2 and later). I described this system in prior blog entries (one, two, three), but many people keep asking me about it.
Josh Hursey from the University of Wisconsin–La Crosse wrote this IMUDI paper about the interface (IMUDI is a sub-workshop of EuroMPI that focuses on end-user issues) to raise a little more publicity and awareness of this process affinity system. Specifically, we designed the affinity system so that we could get feedback from real end users about what is useful and what is not.
Read More »
Tags: HPC, mpi, NUNA, Open MPI, process affinity, processor affinity
A few people asked me to post the slides that I just presented in the Cisco vendor session at EuroMPI’13. In short, I gave a brief overview of our servers and switches, and then some technical details of how we use SR-IOV in our usNIC, etc.
Here are the slides: Read More »
Tags: HPC, mpi, USNIC
At this year’s 2013 High Performance Computing on Wall Street, the greatest minds from the financial services industry once again gathered to discuss the latest technology trends that give financial firms an edge: accessing information in real time to better predict where markets are going and where best to invest.
Many vendors showed off their latest data analytics software, which can analyze market data in real time — but without the right infrastructure, traders can be delayed in acting on that information. Trading smarter was the key underlying theme: the fabric can provide greater transparency and enhance application delivery in ways that directly impact the business. Read More »
Tags: Cisco, Financial Services, High Performance Trading Fabric, HPC, low latency, programmable networks, SDN