Back in the ’90s, there was a huge bubble of Java activity in academic circles. Java was the new language that was going to take over the world, and an immense amount of research was produced mapping classic computer science problems into Java.
Among the projects produced were several that tried to bring MPI to Java. That is, they added a set of Java bindings over existing C-based MPI implementations. However, many in the HPC crowd eschewed Java for compute- or communication-heavy applications because of the performance overheads of the Java language and its runtime implementations.
Hence, the Java+MPI=HPC efforts never gained much traction.
But even though the computer science Java bubble eventually ended, Java has become quite an important language in the enterprise. Java run-time environments, compilers, and programming models have steadily improved over the years. Java is now commonly used for many different types of compute-heavy enterprise applications.
Tags: HPC, java, mpi
A while ago, Brock Palen tweeted me an MPI question: how does one send Standard Template Library (STL) C++ objects in MPI?
The problem that Brock is asking about is that STL objects tend to be variable size and type. The whole point of the STL is to create flexible, easy-to-use “containers” of arbitrary types. For example, STL lists allow you to create an arbitrary length list of a given type.
To cite a concrete example, let’s say that my application has an STL vector object named my_vector that contains a bunch of integers. What parameters do I pass to MPI_SEND to send this beast?
Tags: HPC, mpi
Welcome to 2012! I’m finally about caught up from the Christmas holidays, last week’s travel to the MPI Forum, etc. It’s time to get back to blogging.
Let’s start with a short one…
Rich Brueckner from InsideHPC interviewed me right before the Christmas break about the low Ethernet MPI latency demo that I gave at SC’11. I blogged about this stuff before, but in the slidecast that Rich posted, I provide a bit more detail about how this technology works.
Remember that this is Cisco’s 1st generation virtualized NIC; our 2nd generation is coming “soon,” and will have significantly lower MPI latency (I hate being fuzzy and not quoting the exact numbers, but the product is not yet released, so I can’t comment on it yet. I’ll post the numbers when the product is actually available).
Tags: HPC, Linux, mpi, VFIO
MPI_Bcast("Hi, this is Jeff Squyres. I'm not in the office this week. "
"I'll see your message when I return in 2012. Happy New Year!",
1, MPI_MESSAGE, MPI_COMM_WORLD);
MPI_Bcast("Beep.", 1, MPI_MESSAGE, MPI_COMM_WORLD);
MPI_Recv(your_message, 1, MPI_MESSAGE, MPI_ANY_SOURCE, MPI_ANY_TAG,
         MPI_COMM_WORLD, MPI_STATUS_IGNORE);
Tags: HPC, mpi
The upcoming January 2012 MPI Forum meeting is the last meeting to get new material into the MPI-3.0 specification.
Specifically, there are three steps to getting something into the MPI specification: a formal reading and two separate votes. Each of these three steps must happen at a separate meeting. This makes adding new material a long process… but that’s a good thing in terms of a standard. You want to be sure. You need a good amount of time of reflection and investigation before you standardize something for the next 10-20 years.
Of course, due to the deadline, we have a giant list of proposals up for a first reading in January (this is not including the 1st and 2nd votes also on the agenda). Here’s what’s on the docket so far — some are big, new things, while others are small clarifications to existing language:
Tags: HPC, mpi, MPI-3.0