It’s finally out! The Architecture of Open Source Applications, Volume II, is now available in dead tree form (PDFs will be available for sale soon, I’m told).
Additionally, all content from the book will also be freely available on aosabook.org next week sometime (!).
But know this: all royalties from the sales of this book go to Amnesty International. So buy a copy; it’s for a good cause.
Both Volume 1 and Volume 2 are excellent educational material for seeing how other well-known open source applications have been architected. What better way to learn than to see how successful, widely used open source software packages were designed? Even better, after you read about each package, you can go look at the source code itself to further grok the issues.
Tags: HPC, mpi, Open MPI, open source
Today we feature a deep-dive guest post from Ralph Castain, Senior Architect in the Advanced R&D group at Greenplum, an EMC company.
Jeff is lazy this week, so he asked that I provide some notes on the process binding options available in the Open MPI (OMPI) v1.5 release series.
First, though, a caveat. The binding options in the v1.5 series are pretty much the same as in the prior v1.4 series. However, future releases (beginning with the v1.7 series) will have significantly different options providing a broader array of controls. I won’t address those here, but will do so in a later post.
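As a quick preview of what those v1.4/v1.5-era controls look like in practice, here is a sketch of the binding flags `mpirun` accepts in that series (the application name `./my_app` and process count are placeholders; check `mpirun --help` on your installation, since defaults changed between releases):

```shell
# Bind each MPI process to a single core (v1.4/v1.5-series syntax)
mpirun --bind-to-core -np 4 ./my_app

# Bind each process to all the cores of one socket instead
mpirun --bind-to-socket -np 4 ./my_app

# Print the bindings that were actually applied at launch time
mpirun --report-bindings --bind-to-core -np 4 ./my_app
```

`--report-bindings` is handy for sanity-checking: it makes each daemon report which cores/sockets its local processes were bound to, so you can verify the layout before trusting a benchmark run.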
Tags: HPC, hwloc, mpi, NUMA, Open MPI, process affinity, processor affinity
Let me tell you a reason why open source and open communities are great: information sharing.
Let me explain…
I am Cisco’s representative to the Open MPI project, a middleware implementation of the Message Passing Interface (MPI) standard that facilitates big number crunching and parallel programming. It’s a fairly large, complex code base: Ohloh says that there are over 674,000 lines of code. Open MPI is portable to a wide variety of platforms and network types.
However, supporting all the things that MPI is supposed to support and providing the same experience on every platform and network can be quite challenging. For example, a user posted a problem to our mailing list the other day about a specific feature not working properly on OS X.
Tags: HPC, mpi, MPICH2, Open MPI, open source
I’m sure most everyone has heard already, but the K supercomputer has been upgraded and now reaches over 10 petaflops. Wow!
10.51 petaflops, actually, so if you round up, you can say that they “turned it up to 11.” Ahem.
We’ll actually have Shinji Sumimoto from the K team speaking during the Open MPI BOF at SC’11. Rolf vandeVaart from NVIDIA will also be discussing their role in Open MPI during the BOF.
We have the 12:15-1:15pm timeslot on Wednesday (room TCC 303); come join us to hear about the present status and future plans for Open MPI.
Tags: HPC, Open MPI, Supercomputing
In my last post, I talked about why MPI wrapper compilers are Good for you. The short version is that it is faaar easier to use a wrapper compiler than to force users to figure out what compiler and linker flags the MPI implementation needs — because sometimes they need a lot of flags.
Hence, MPI wrappers are Good for you. They can save you a lot of pain.
That being said, they can also hurt portability, as one user noted on the Open MPI users’ mailing list recently.
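To make the trade-off concrete, here is a sketch of what the wrapper hides (the source file name is a placeholder; Open MPI’s wrappers accept `--showme` to reveal the underlying command line):

```shell
# Let the wrapper supply all the MPI compile and link flags for you
mpicc hello_mpi.c -o hello_mpi

# Ask Open MPI's wrapper what command it would actually run
mpicc --showme hello_mpi.c -o hello_mpi

# Or inspect just the compile-time or link-time flags
mpicc --showme:compile
mpicc --showme:link
```

And here is the portability rub: these introspection flags are not standardized. MPICH-derived wrappers spell it `mpicc -show`, for example, so build systems that hard-code one implementation’s wrapper flags can break when pointed at another MPI.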
Tags: HPC, mpi, MPICH, Open MPI