Cisco Blog > High Performance Computing Networking

Traffic (redux)

July 28, 2014 at 10:44 am PST

I've written about network traffic before (see this post and this post). It's the subject of endless blog posts, help forums, and instructional guides across the internet.

In a High Performance Computing (HPC) context, network traffic has some fascinating characteristics that are quite different from those of other kinds of network traffic.
Read More »


BigMPI: You can haz moar counts!

June 13, 2014 at 9:01 am PST

[Image: Grumpy cat hates small MPI counts]

Jeff Hammond has recently started developing the BigMPI library.

BigMPI is intended to handle all the drudgery of sending and receiving large messages in MPI.

In Jeff's own words:

[BigMPI is an] Interface to MPI for large messages, i.e. those where the count argument exceeds INT_MAX but is still less than SIZE_MAX. BigMPI is designed for the common case where one has a 64b address space and is unable to do MPI communication on more than 2^31 elements despite having sufficient memory to allocate such buffers.
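
Here's a rough sketch (mine, not taken from BigMPI's documentation) of what a large-count send and receive might look like through BigMPI. The header name bigmpi.h and the MPIX_Send_x / MPIX_Recv_x names and signatures are assumptions based on BigMPI's stated goal of mirroring the standard MPI calls with an MPI_Count count argument:

/* Hypothetical sketch of BigMPI usage; the header and function names are
 * assumed, not verified against the library.  Run with 2 processes. */
#include <mpi.h>
#include <bigmpi.h>   /* assumed BigMPI header */
#include <stdlib.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    /* 3 billion doubles (~24 GB): far beyond what an int count can express */
    MPI_Count n = 3000000000LL;
    double *buf = malloc((size_t)n * sizeof(double));

    if (rank == 0) {
        MPIX_Send_x(buf, n, MPI_DOUBLE, 1, 0, MPI_COMM_WORLD);
    } else if (rank == 1) {
        MPIX_Recv_x(buf, n, MPI_DOUBLE, 0, 0, MPI_COMM_WORLD,
                    MPI_STATUS_IGNORE);
    }

    free(buf);
    MPI_Finalize();
    return 0;
}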

Read More »


Networks for MPI

May 24, 2014 at 7:14 am PST

It seems like we've gotten a rash of "how do I set up my new cluster for MPI?" questions on the Open MPI mailing list recently.

I take this as a Very Good Thing, actually -- it means more and more people are tinkering with and discovering the power of parallel computing, HPC, and MPI.

Read More »


First public tools for the MPI_T interface in MPI-3.0

May 20, 2014 at 5:00 am PST

Today's guest post is written by Tanzima Islam, Postdoctoral Researcher at Lawrence Livermore National Laboratory, and Kathryn Mohror and Martin Schulz, Computer Scientists at Lawrence Livermore National Laboratory.

[Image: MPI_T logo]

The latest version of the MPI Standard, MPI-3.0, includes a new interface for tools: the MPI Tools Information Interface, or “MPI_T”.

MPI_T complements the existing MPI profiling interface, PMPI, and offers access to both internal performance information and runtime settings. It is based on the concept of typed variables that can be queried, read, and set through the MPI_T API.
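
To make that concrete, here is a minimal sketch (mine, not from the guest post) that uses the standard MPI-3.0 MPI_T calls to enumerate an implementation's control variables and print their names and descriptions:

/* List the MPI_T control variables exposed by the MPI implementation.
 * Note that MPI_T has its own init/finalize and can be used even
 * before MPI_INIT is called. */
#include <mpi.h>
#include <stdio.h>

int main(void)
{
    int provided, num_cvars;

    MPI_T_init_thread(MPI_THREAD_SINGLE, &provided);
    MPI_T_cvar_get_num(&num_cvars);

    for (int i = 0; i < num_cvars; i++) {
        char name[256], desc[1024];
        int name_len = sizeof(name), desc_len = sizeof(desc);
        int verbosity, bind, scope;
        MPI_Datatype datatype;
        MPI_T_enum enumtype;

        MPI_T_cvar_get_info(i, name, &name_len, &verbosity, &datatype,
                            &enumtype, desc, &desc_len, &bind, &scope);
        printf("cvar %d: %s -- %s\n", i, name, desc);
    }

    MPI_T_finalize();
    return 0;
}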

Read More »


Can I MPI_SEND (and MPI_RECV) with a count larger than 2 billion?

May 17, 2014 at 5:13 am PST

This question is inspired by the fact that the "count" parameter to MPI_SEND and MPI_RECV (and friends) is an "int" in C, which is typically a signed 4-byte integer, meaning that its largest positive value is 2^31 - 1, or about 2 billion.

However, this is the wrong question.

The right question is: can MPI send and receive messages with more than 2 billion elements?
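
As a sketch of one classic workaround (not necessarily the approach discussed in the full post), you can describe a huge buffer with a derived datatype so that the count actually passed to MPI_SEND and MPI_RECV stays comfortably below 2^31:

/* Send 4 billion doubles by describing them as 4,000 chunks of 1 million
 * elements each, so the counts handed to MPI stay small.  Run with 2
 * processes; the buffer is ~32 GB per rank, so treat this as a sketch. */
#include <mpi.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    const int chunk_elems = 1000000;
    const int nchunks     = 4000;
    double *buf = malloc((size_t)chunk_elems * nchunks * sizeof(double));

    /* One "chunk" datatype = 1 million contiguous doubles */
    MPI_Datatype chunk_type;
    MPI_Type_contiguous(chunk_elems, MPI_DOUBLE, &chunk_type);
    MPI_Type_commit(&chunk_type);

    if (rank == 0) {
        MPI_Send(buf, nchunks, chunk_type, 1, 0, MPI_COMM_WORLD);
    } else if (rank == 1) {
        MPI_Recv(buf, nchunks, chunk_type, 0, 0, MPI_COMM_WORLD,
                 MPI_STATUS_IGNORE);
    }

    MPI_Type_free(&chunk_type);
    free(buf);
    MPI_Finalize();
    return 0;
}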

Read More »
