Long-time Open MPI mailing list contributor and Open Source Grid Engine (OGE, previously known as SGE) maintainer Rayson Ho has just started a blog about all things Open Grid Engine.
In one of his first posts, he’s giving away a free Cisco Live! pass for the June 10-14, 2012 event. All you have to do is answer a “simple” MPI question (well, it might not be as simple as it looks).
As of yesterday, no one had answered the question correctly, so it’s still up for grabs!
Tags: cisco live, HPC, mpi
Last March’s MPI Forum meeting was the last meeting at which proposals could get a “formal reading” for inclusion in MPI-3. Some were quite controversial. Some ended up being withdrawn before the next meeting.
This week’s Forum meeting in Japan saw the first vote (out of two) for each of the surviving proposals from the March meeting (see the full voting results here). Some continued to be quite controversial. Some didn’t survive their first votes (doh!). Others narrowly survived.
Here’s a summary of some of the users-will-care-about-these proposals, and how they fared: Read More »
Tags: HPC, mpi, MPI-3
A few people have made remarks to me about the pair of CCI guest blog entries from Scott Atchley of Oak Ridge (entry 1, entry 2) indicating that they didn’t quite “get it”. So let me try to put Scott’s words in concrete terms…
CCI is an API that represents a unification of low-level “native” network APIs. Specifically: many network vendors are doing essentially the same things in their low-level “native” network APIs. In the HPC world, MPI hides all these different low-level APIs. But there are real-world non-HPC apps out there that need extreme network performance, and therefore write their own unification layers for verbs, Portals, MX, etc. Ick!
So why don’t we unify all these low-level native network APIs?
NOTE: This is quite similar to how we unified the high-level network APIs into MPI.
Two other key facts are important to realize here: Read More »
Tags: CCI, HPC
Marking the end of over 2 years of active development, the Open MPI project has released a new “stable” series of releases starting with v1.6.
Specifically, Open MPI maintains two concurrent release series:
- Odd number releases are “feature development” releases (e.g., 1.5.x). They’re considered to be stable and tested, but not necessarily yet “mature” (i.e., have lots of real-world usage to shake out bugs). New features are added over the life of feature development releases.
- Even number releases are “super stable” releases (e.g., 1.6.x). After enough time, feature development releases transition into super stable releases — the new functionality has been vetted by enough real world usage to be considered stable enough for production sites.
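The odd/even convention above can be sketched in a few lines of Python. Note that `series_type` is a hypothetical helper written just for illustration; it is not part of Open MPI:

```python
def series_type(version: str) -> str:
    """Classify an Open MPI version string by the parity of its minor number.

    Odd minor numbers (e.g., 1.5.x) are feature development releases;
    even minor numbers (e.g., 1.6.x) are super stable releases.
    """
    minor = int(version.split(".")[1])
    return "feature development" if minor % 2 == 1 else "super stable"

print(series_type("1.5.4"))  # feature development
print(series_type("1.6"))    # super stable
```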
Conceptually, it looks like this:
Tags: HPC, mpi
It’s finally out! The Architecture of Open Source Applications, Volume II, is now available in dead tree form (PDFs will be available for sale soon, I’m told).
Additionally, all content from the book will also be freely available on aosabook.org next week sometime (!).
But know this: all royalties from the sales of this book go to Amnesty International. So buy a copy; it’s for a good cause.
Both volumes 1 and 2 are excellent educational material for seeing how other well-known open source applications have been architected. What better way to learn than to see how successful, widely-used open source software packages were designed? Even better, after you read about each package, you can go look at the source code itself to further grok the issues.
Read More »
Tags: HPC, mpi, Open MPI, open source