Cisco Blogs: High Performance Computing Networking

BigMPI: You can haz moar counts!

June 13, 2014 at 9:01 am PST

[Image: Grumpy cat hates small MPI counts]

Jeff Hammond has recently started developing the BigMPI library.

BigMPI is intended to handle all the drudgery of sending and receiving large messages in MPI.

In Jeff’s own words:

[BigMPI is an] Interface to MPI for large messages, i.e. those where the count argument exceeds INT_MAX but is still less than SIZE_MAX. BigMPI is designed for the common case where one has a 64b address space and is unable to do MPI communication on more than 2^31 elements despite having sufficient memory to allocate such buffers.
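
To make the "drudgery" concrete, here is a minimal sketch of the classic workaround that a library like BigMPI automates: splitting a send of more than 2^31 elements into chunks described by a contiguous derived datatype, so every count actually handed to MPI fits in an int. The helper name send_big and the chunk size are mine for illustration, not part of BigMPI's API.

    #include <mpi.h>
    #include <stddef.h>

    /* Hypothetical helper: send 'count' doubles even when count > INT_MAX.
       Each derived-datatype "element" covers 2^30 doubles, so both counts
       passed to MPI_Send stay comfortably within a signed int. */
    int send_big(const double *buf, size_t count, int dest, int tag,
                 MPI_Comm comm)
    {
        const size_t chunk = (size_t)1 << 30;  /* doubles per big element */
        size_t nchunks = count / chunk;
        size_t rem     = count % chunk;

        if (nchunks > 0) {
            MPI_Datatype bigtype;
            MPI_Type_contiguous((int)chunk, MPI_DOUBLE, &bigtype);
            MPI_Type_commit(&bigtype);
            MPI_Send(buf, (int)nchunks, bigtype, dest, tag, comm);
            MPI_Type_free(&bigtype);
        }
        if (rem > 0)  /* leftovers; same tag, so MPI ordering keeps it safe */
            MPI_Send(buf + nchunks * chunk, (int)rem, MPI_DOUBLE,
                     dest, tag, comm);
        return MPI_SUCCESS;
    }

The receiver has to mirror the same decomposition, which is exactly the kind of bookkeeping BigMPI exists to hide.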

Networks for MPI

May 24, 2014 at 7:14 am PST

It seems like we’ve gotten a rash of “how do I set up my new cluster for MPI?” questions on the Open MPI mailing list recently.

I take this as a Very Good Thing, actually — it means more and more people are tinkering with and discovering the power of parallel computing, HPC, and MPI.

First public tools for the MPI_T interface in MPI-3.0

May 20, 2014 at 5:00 am PST

Today’s guest post is written by Tanzima Islam, Postdoctoral Researcher, and Kathryn Mohror and Martin Schulz, Computer Scientists, all at Lawrence Livermore National Laboratory.

The latest version of the MPI Standard, MPI 3.0, includes a new interface for tools: the MPI Tools Information Interface, or “MPI_T”.

MPI_T complements the existing MPI profiling interface, PMPI, and offers access to both internal performance information and runtime settings. It is based on the concept of typed variables that can be queried, read, and set through the MPI_T API.
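
As a taste of the interface, here is a minimal sketch using the standard MPI 3.0 MPI_T calls to list the control variables an MPI implementation exposes; the buffer sizes are arbitrary choices for the example.

    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int provided, ncvars;

        /* MPI_T has its own init/finalize, usable even before MPI_Init */
        MPI_T_init_thread(MPI_THREAD_SINGLE, &provided);
        MPI_T_cvar_get_num(&ncvars);

        for (int i = 0; i < ncvars; i++) {
            char name[256], desc[1024];
            int name_len = sizeof(name), desc_len = sizeof(desc);
            int verbosity, bind, scope;
            MPI_Datatype dtype;
            MPI_T_enum enumtype;

            MPI_T_cvar_get_info(i, name, &name_len, &verbosity, &dtype,
                                &enumtype, desc, &desc_len, &bind, &scope);
            printf("cvar %d: %s\n", i, name);
        }

        MPI_T_finalize();
        return 0;
    }

The number and names of the variables are implementation-specific, which is precisely why tools need a query interface rather than a fixed list.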

Can I MPI_SEND (and MPI_RECV) with a count larger than 2 billion?

May 17, 2014 at 5:13 am PST

This question is inspired by the fact that the “count” parameter to MPI_SEND and MPI_RECV (and friends) is an “int” in C, which is typically a signed 4-byte integer, meaning that its largest positive value is 2^31 - 1, or about 2 billion.

However, this is the wrong question.

The right question is: can MPI send and receive messages with more than 2 billion elements?
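
The reason the reframing matters: derived datatypes let one “element” describe arbitrarily many bytes, so the int count never needs to grow. Here is a minimal sketch of the idea, sending 4 GiB with a count of exactly 1 (the helper name make_4gib_type is made up for illustration):

    #include <mpi.h>

    /* Two-level contiguous datatype describing 4 GiB of bytes. */
    MPI_Datatype make_4gib_type(void)
    {
        MPI_Datatype mib, gib4;
        MPI_Type_contiguous(1 << 20, MPI_BYTE, &mib);  /* 1 MiB            */
        MPI_Type_contiguous(1 << 12, mib, &gib4);      /* 4096 MiB = 4 GiB */
        MPI_Type_commit(&gib4);
        MPI_Type_free(&mib);  /* gib4 keeps what it needs from mib */
        return gib4;
    }

    /* usage: MPI_Send(buf, 1, make_4gib_type(), dest, tag, comm); */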

MPI over 40Gb Ethernet

May 10, 2014 at 3:29 am PST

Half-round-trip ping-pong latency may be the first metric that everyone looks at with MPI in HPC, but bandwidth is usually the next one examined.
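
For reference, the half-round-trip number usually comes from a simple two-rank ping-pong like the minimal sketch below (the iteration count is an arbitrary choice; run it with exactly two ranks):

    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        const int iters = 1000;
        char byte = 0;
        int rank;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        double t0 = MPI_Wtime();
        for (int i = 0; i < iters; i++) {
            if (rank == 0) {
                MPI_Send(&byte, 1, MPI_BYTE, 1, 0, MPI_COMM_WORLD);
                MPI_Recv(&byte, 1, MPI_BYTE, 1, 0, MPI_COMM_WORLD,
                         MPI_STATUS_IGNORE);
            } else if (rank == 1) {
                MPI_Recv(&byte, 1, MPI_BYTE, 0, 0, MPI_COMM_WORLD,
                         MPI_STATUS_IGNORE);
                MPI_Send(&byte, 1, MPI_BYTE, 0, 0, MPI_COMM_WORLD);
            }
        }
        double t = MPI_Wtime() - t0;

        if (rank == 0)  /* divide by 2 for the half round trip */
            printf("latency: %.2f us\n", 1e6 * t / iters / 2);

        MPI_Finalize();
        return 0;
    }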

40Gbps Ethernet has been available for switch-to-switch links for quite a while, and 40Gbps NICs are starting to make their way down to the host.

How does MPI perform with a 40Gbps NIC?
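
For calibration: 40 Gbps divided by 8 bits per byte is 5 GB/s of raw wire rate, so that figure, minus Ethernet framing and MPI protocol overheads, is the ceiling any benchmark result should be judged against.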
