We provide an overview of how to use the Java bindings included in Open MPI. The aim is to expose MPI functionality to Java programmers with minimal performance penalties.
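To give a flavor of what the bindings look like, here is a minimal "hello world" sketch using the mpi.* package that ships with Open MPI (the class name is just an example):

```java
import mpi.*;

public class Hello {
    public static void main(String[] args) throws MPIException {
        // Set up the MPI environment; command-line args may carry runtime options
        MPI.Init(args);

        int rank = MPI.COMM_WORLD.getRank();   // this process's rank in MPI_COMM_WORLD
        int size = MPI.COMM_WORLD.getSize();   // total number of processes

        System.out.println("Hello from rank " + rank + " of " + size);

        // Tear down the MPI environment
        MPI.Finalize();
    }
}
```

Assuming Open MPI was configured with --enable-mpi-java, a program like this is typically compiled with the mpijavac wrapper and launched with mpirun (e.g., mpirun -np 4 java Hello).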
I was on vacation last week, and had a nice April Fool’s blog post queued up to be posted at 8am US Eastern time on 1 April 2014.
It should have appeared whilst I was relaxing on a beach… but due to a bug in our WordPress installation, it didn’t. And I didn’t find out about the error until after I returned from vacation (long after April 1st).
This year, EuroMPI/ASIA 2014 will hold two workshops. Accepted workshop papers will be included in ACM's ICPS conference proceedings for EuroMPI/ASIA 2014.
Workshop information is available on the EuroMPI/ASIA 2014 web site, and further details are on each workshop's respective web site. If you have any questions, please contact the workshop organizers listed on those sites.
We are looking forward to your submissions!
After a metric ton of work by the entire community, Open MPI has released version 1.7.5.
Among the zillions of minor updates and new enhancements are two major new features:
- MPI-3.0 conformance
- OpenSHMEM support (Linux only)
See this post on the Open MPI announcement list for more details.
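As an illustration of the MPI-3.0 additions, non-blocking collectives are now part of the standard. The sketch below shows roughly how a non-blocking barrier could look through the Java bindings described earlier; the iBarrier/waitFor names follow the bindings' mpiJava-style conventions, so consult the mpi.* javadoc for the exact signatures.

```java
import mpi.*;

public class NonBlockingBarrier {
    public static void main(String[] args) throws MPIException {
        MPI.Init(args);

        // Start an MPI-3.0 non-blocking barrier; the call returns immediately
        // with a Request handle (method name assumed from mpiJava-style naming)
        Request req = MPI.COMM_WORLD.iBarrier();

        // ... other useful work can overlap with the barrier here ...

        // Complete the barrier: blocks until all processes have entered it
        req.waitFor();

        MPI.Finalize();
    }
}
```

The same pattern (start a collective, overlap computation, then complete the request) applies to the other non-blocking collectives introduced in MPI-3.0.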
As an HPC old-timer, I’m used to thinking of HPC networks as large layer-2 (L2) subnets. All HPC traffic (e.g., MPI traffic) is therefore designed to stay within a single L2 subnet.
The next layer up, L3, is the "network" layer in the OSI model; it adds abstractions beyond what L2 provides. For example, IP switching and routing occur at L3. Indeed, L3-based networks can be composed of multiple subnets.
I’ve come to appreciate that, especially with modern high-speed networking gear, there is no reason to limit HPC networks to L2.