I retweeted a tweet today that may seem strange for an MPI guy. I was echoing the sentiment that not everything in HPC revolves around MPI.
My rationale for retweeting is simple: I agree with the sentiment.
But I do want to point out that this statement has multiple levels to it.
- MPI is a tool. There are many (HPC) tools out there. Each tool has its own strengths and weaknesses, particularly for specific tasks. MPI is very good at message passing. It’s terrible at serving HTML. Ok, that’s a little extreme as an example, but you get the point — there are plenty of HPC tasks for which MPI is not necessarily the best tool.
- MPI may be too low-level for what you’re trying to accomplish; higher-level abstractions might well be what you want. For example: maybe you just want to simulate N-body scenarios, or you just want to solve linear algebra equations, or you just want to simulate chemical reactions. Who cares what network transport and/or programming model is underneath? You should use a tool that presents a high-level abstraction that is relevant to the problem that you’re trying to solve whenever possible.
Indeed, there are many tools that present high-level abstractions and use MPI under the covers. And that’s great! Remember that one of MPI’s main target audiences is middleware authors.
MPI implementers have spent a lot of time, effort, and money building wonderful software platforms. It’s perfect if MPI is then used as the lower communications layer for higher-level abstractions such that its users won’t know — or likely care — that MPI is used underneath. That’s kinda the whole point of layered software.
Don’t get me wrong — there are certainly higher-level abstractions for which MPI is not a good underlying tool (which I think was probably the author’s point). But I did want to remind you, gentle reader, that not all higher-level abstractions need to advertise what their lower layers are built upon.