Over this past weekend, I had the motivation and time to overhaul Open MPI’s Fortran support for the better. Points worth noting:
- The “use mpi” module now includes all MPI subroutines. Strict type checking for everything!
- Open MPI now only uses a single Fortran compiler — there’s no more artificial division between “f77” and “f90”
There’s still work to be done, of course (this is still off in a Mercurial bitbucket repo — not in the Open MPI main line SVN trunk yet), but the results of this weekend code sprint are significantly simpler Open MPI Fortran plumbing behind the scenes and a much, much better implementation of the MPI-2 “use mpi” Fortran bindings.
Tags: Fortran, HPC, mpi, Open MPI
Fab Tillier (Microsoft MPI) and I recently proposed a set of user-level timers for MPI. The following slides are an example of what the interface could be:
Tags: HPC, mpi, MPI-3.0
Google today announced its Summer of Code 2011 project winners. One of the winners was a project proposed by George Andreou, based on this idea on the TCL wiki: create some kind of “native” hwloc binding for TCL.
Congratulations, George! A (brief) abstract of George’s winning project can be found here.
There are more details involved than what is included in that abstract, of course, but I’m excited to see hwloc continue to spread and become genuinely useful to an ever-growing community.
Tags: HPC, hwloc, TCL
(today’s entry is guest-written by Fab Tillier, Microsoft MPI engineer extraordinaire)
When you send data in MPI, you specify how many items of a particular datatype you want to send in your call to an MPI send routine. Likewise, when you read data from a file, you specify how many datatype elements to read.
This “how many” value is referred to in MPI as a count parameter, and all of MPI’s functions define count parameters as integers: int in C, INTEGER in Fortran. This definition often limits users to 2³¹ elements (i.e., roughly two billion elements) because int and INTEGER default to 32 bits on many of today’s platforms.
That may sound pretty big, but consider that a 2³¹-byte file is not really that large by today’s standards — especially in HPC, where datasets can sometimes be terabytes in size. Reading a ~2 gigabyte file can take (far) less than a second.
Tags: HPC, mpi, MPI-3.0
Here’s some MPI quick-bites for this week:
- The MPI_MPROBE proposal was voted into MPI-3 a few weeks ago. Yay! (see this quick slideshow for an explanation of what MPI_MPROBE is)
- The Hardware Locality project just released hwloc v1.2. This new version now includes distance metrics between objects in the topology tree. W00t!
- Support for large counts looks to be on track for inclusion in MPI-3; it’s up for its first formal reading at the upcoming Forum meeting.
- The same is true for the new MPI-3 one-sided stuff; it, too, is up for its first formal reading at the upcoming Forum meeting (they haven’t sent around their new PDF yet, but they will within a week or so — stay tuned here for updates).
- Likewise, the new Fortran-08 bindings are up for their first Forum presentation next meeting. We solved all of the outstanding Fortran issues with the F77 and F90 bindings… with the possible exception of non-blocking communication code movement. That one is still being debated with the Fortran language standardization body — it’s a complicated issue!
- Finally — the new MPI tools interface chapter is up for a first formal reading, too.
That’s a lot of first formal readings in one meeting…
Tags: HPC, hwloc, mpi, MPI Forum, MPI-3