Jonathan Dursi recently posted a fairly controversial blog entry entitled “HPC is dying, and MPI is killing it.”
Some people immediately dismissed the blog post (and its followups) as trolling. Others praised Jonathan for bringing up the issues.
Brock Palen and I recently chatted with Jonathan about this blog entry on RCE-Cast.
Here’s my take on Jonathan’s writings:
- Jonathan posted two followup blog entries (this one and this one), which do a good job of further explaining his viewpoint and discussing feedback that he received.
- I don’t necessarily agree with all of Jonathan’s points.
- That being said, I think that many of his points are valid.
- More specifically: Jonathan’s post is an honest attempt at stimulating discussion (it is most certainly not trolling, as some have alleged). If you don’t agree with his points, great — let’s discuss and come up with a clear, crisply stated, reasoned set of alternate points.
- Therefore, I believe that these topics form an excellent foundation for stimulating conversation in the HPC community. Introspection is a Good Thing.
Truth be told, there are lots of great things about the HPC community, but there are also some not-so-great things. Things that could be improved. Indeed, why isn’t the HPC community drawing as many new people as other communities, such as the Big Data community?
I think that this has a lot to do with the “missing middle” that has been discussed many times — which can be at least partially characterized by the fact that HPC tends to be at the very top end of the computational spectrum (and therefore a somewhat small group). Some view that as a good thing — that HPC should really only be the very bleeding edge of research, and the very largest (national-scale) supercomputers.
I disagree. The rest of the world can benefit from HPC technologies, too — you shouldn’t need to be a die-hard HPC expert to enjoy the fruits of its labor.
Case in point: ISVs have done an excellent job of capitalizing on HPC technologies (e.g., MPI). They incorporate parallel technologies under the covers. The end user doesn’t know or care how the parallelization works; they just know that when they point the software at a cluster of servers, they get their results faster.
So what’s my point?
I’m certainly not saying that we need to “dumb down” MPI and/or general HPC approaches.
I think, loosely speaking, it is worthwhile to take a good look around MPI and the overall HPC community and ask ourselves: how can we be better?
Good question. Why don’t you start researching instead of complaining?
I think improvement is a community-wide process. It’s not a job for just a single person.
Although I certainly try to do my part by contributing to academic research and being a chief proponent of ease-of-use functionality in Open MPI, it is nowhere near enough. Improving the community, by definition, will require participation from a lot of people. The first step is awareness, which was started by Jonathan’s blog entry. Hopefully, Brock’s and my podcast helped in that area, too. The next step is discussion and introspection, and that is something in which we all need to participate.
My fundamental issue with Dursi’s posts is that he says things about MPI that are plainly not true. I’m always happy to engage people in a vigorous debate of programming models, but I insist that we subscribe to Moynihan’s “Everyone is entitled to their own opinions, but they are not entitled to their own facts.”
I, too, didn’t agree with all of Jonathan’s points. But many of them are still good (did you read his followup blog posts? they did a good job of explaining additional points, including your [and my] feedback about there being lots of nice MPI-based libraries).
Comments are closed.