Today’s guest post is written by Tanzima Islam, Postdoctoral Researcher at Lawrence Livermore National Laboratory, and Kathryn Mohror and Martin Schulz, Computer Scientists at Lawrence Livermore National Laboratory. The latest version of the MPI Standard, MPI
This question is inspired by the fact that the “count” parameter to MPI_SEND and MPI_RECV (and friends) is an “int” in C, which is typically a signed 4-byte integer, meaning that its largest positive value is 2^31 - 1, or about 2 billion.
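For readers who actually need to move more than 2 billion elements, one common workaround is to group the data into a derived datatype so that the count passed to MPI_SEND stays within range. Below is a minimal sketch of that idea, not code from the post itself; the send_big() helper and the chunk size are illustrative assumptions, and error checking is omitted.

    /* Sketch: sending more than INT_MAX elements by grouping them into a
     * derived datatype.  send_big() and CHUNK are illustrative, not from
     * the post; error checking is omitted for brevity. */
    #include <mpi.h>
    #include <stddef.h>

    void send_big(const char *buf, size_t total_bytes, int dest, MPI_Comm comm)
    {
        const int CHUNK = 1 << 20;            /* 1 MiB per datatype element */
        MPI_Datatype chunk_type;

        /* One chunk_type element stands for CHUNK bytes, so the count
         * passed to MPI_Send shrinks by a factor of CHUNK. */
        MPI_Type_contiguous(CHUNK, MPI_BYTE, &chunk_type);
        MPI_Type_commit(&chunk_type);

        int nchunks = (int)(total_bytes / CHUNK);  /* fits in an int for totals < 2 PiB */
        MPI_Send(buf, nchunks, chunk_type, dest, 0, comm);

        /* The leftover tail (fewer than CHUNK bytes) goes separately. */
        int rem = (int)(total_bytes % CHUNK);
        if (rem > 0)
            MPI_Send(buf + (size_t)nchunks * CHUNK, rem, MPI_BYTE, dest, 1, comm);

        MPI_Type_free(&chunk_type);
    }

The receiver mirrors the same chunking: it commits an identical chunk datatype, posts a matching receive for the whole chunks, and then receives the leftover bytes.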
Half-round-trip ping-pong latency may be the first metric that everyone looks at for MPI in HPC, but bandwidth is one of the next metrics examined. 40Gbps Ethernet has been available for switch-to-switch links for quite a while, and 40Gbps NICs are
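As a rough illustration of how such bandwidth numbers are typically measured, here is a minimal ping-pong bandwidth sketch; the 4 MiB message size and 100 iterations are arbitrary assumptions, not the benchmark behind any numbers in the post.

    /* Sketch: a simple two-rank ping-pong bandwidth test.
     * Run with at least two ranks. */
    #include <mpi.h>
    #include <stdio.h>
    #include <stdlib.h>

    int main(int argc, char **argv)
    {
        const int SIZE = 4 * 1024 * 1024, ITERS = 100;
        int rank;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        char *buf = malloc(SIZE);

        MPI_Barrier(MPI_COMM_WORLD);
        double start = MPI_Wtime();
        for (int i = 0; i < ITERS; ++i) {
            if (rank == 0) {
                MPI_Send(buf, SIZE, MPI_BYTE, 1, 0, MPI_COMM_WORLD);
                MPI_Recv(buf, SIZE, MPI_BYTE, 1, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            } else if (rank == 1) {
                MPI_Recv(buf, SIZE, MPI_BYTE, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
                MPI_Send(buf, SIZE, MPI_BYTE, 0, 0, MPI_COMM_WORLD);
            }
        }
        double elapsed = MPI_Wtime() - start;

        if (rank == 0) {
            /* Two SIZE-byte messages cross the link per iteration. */
            double gbps = 2.0 * SIZE * ITERS * 8 / elapsed / 1e9;
            printf("%.2f Gbps\n", gbps);
        }
        free(buf);
        MPI_Finalize();
        return 0;
    }

Real bandwidth benchmarks also sweep message sizes and keep multiple sends in flight; this sketch only shows the basic shape of the measurement.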
Today’s guest blog post is from Oscar Vega-Gisbert and Dr. Jose Roman from the Department of Information Systems and Computing at the Universitat Politècnica de València, Spain. We provide an overview of how to use the Java bindings included in
The Open MPI project released version 1.8 last week. This major release heralds the beginning of a new production-ready series and brings full MPI-3.0 support and a new OpenSHMEM implementation. Open MPI is developed in a tick-tock fashion
I was on vacation last week, and had a nice April Fool’s blog post queued up to be posted at 8am US Eastern time on 1 April 2014. It should have appeared whilst I was relaxing on a beach… but due to a bug in our WordPress installation, it
Held in conjunction with EuroMPI/ASIA 2014 (see the associated call for papers), September 9-12, 2014, with in-cooperation status with ACM and SIGHPC. This year, EuroMPI/ASIA 2014 will hold two workshops. Accepted workshop papers will be included in
After a metric ton of work by the entire community, Open MPI has released version 1.7.5. Among the zillions of minor updates and new enhancements are two major new features: MPI-3.0 conformance and OpenSHMEM support (Linux only). See this post on the Open
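As a taste of what the new OpenSHMEM support enables, here is a minimal sketch of a ring program; it is an illustration using the current spelling of the OpenSHMEM API, not code from the release. Note that the spec of this release's era spelled initialization start_pes(0) rather than the later shmem_init()/shmem_finalize() used here.

    /* Sketch: a minimal OpenSHMEM ring.  Each PE deposits its rank into
     * a symmetric variable on the next PE. */
    #include <shmem.h>
    #include <stdio.h>

    int dest;   /* symmetric variable: exists at the same address on every PE */

    int main(void)
    {
        shmem_init();
        int me   = shmem_my_pe();
        int npes = shmem_n_pes();

        int src = me;
        /* One-sided put: write 'src' into 'dest' on the next PE in the ring. */
        shmem_int_put(&dest, &src, 1, (me + 1) % npes);
        shmem_barrier_all();    /* wait until all puts have landed */

        printf("PE %d of %d received %d\n", me, npes, dest);
        shmem_finalize();
        return 0;
    }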
As an HPC old-timer, I’m used to thinking of HPC networks as large layer-2 (L2) subnets. All HPC traffic (e.g., MPI traffic) is therefore designed to stay within a single L2 subnet. The next layer up — L3 — is the