Cisco Blog > High Performance Computing Networking

Traffic (redux)

July 28, 2014 at 10:44 am PST

I’ve written about network traffic before (see this post and this post). It’s the subject of endless blog posts, help forums, and instructional guides across the internet.

In a High Performance Computing (HPC) context, network traffic has some fascinating characteristics that set it apart from other kinds of network traffic.
Read More »


Networks for MPI

May 24, 2014 at 7:14 am PST

It seems like we’ve gotten a rash of “how do I set up my new cluster for MPI?” questions on the Open MPI mailing list recently.

I take this as a Very Good Thing, actually — it means more and more people are tinkering with and discovering the power of parallel computing, HPC, and MPI.

Read More »


First public tools for the MPI_T interface in MPI-3.0

May 20, 2014 at 5:00 am PST

Today’s guest post is written by Tanzima Islam, Postdoctoral Researcher at Lawrence Livermore National Laboratory, and Kathryn Mohror and Martin Schulz, Computer Scientists at Lawrence Livermore National Laboratory.

The latest version of the MPI Standard, MPI 3.0, includes a new interface for tools: the MPI Tools Information Interface, or “MPI_T”.

MPI_T complements the existing MPI profiling interface, PMPI, and offers access to both internal performance information and runtime settings. It is based on the concept of typed variables that can be queried, read, and set through the MPI_T API.
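To give a flavor of how the interface is used, here is a minimal sketch (my own illustration, not code from the post) that enumerates the control variables an MPI implementation exposes through MPI_T; the variable names and count are entirely implementation-specific.

#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int provided, num_cvars, i;

    /* MPI_T has its own initialization, separate from MPI_Init. */
    MPI_T_init_thread(MPI_THREAD_SINGLE, &provided);

    /* How many control variables does this MPI implementation expose? */
    MPI_T_cvar_get_num(&num_cvars);
    printf("This MPI exposes %d control variables\n", num_cvars);

    for (i = 0; i < num_cvars; ++i) {
        char name[256], desc[256];
        int name_len = sizeof(name), desc_len = sizeof(desc);
        int verbosity, binding, scope;
        MPI_Datatype datatype;
        MPI_T_enum enumtype;

        MPI_T_cvar_get_info(i, name, &name_len, &verbosity, &datatype,
                            &enumtype, desc, &desc_len, &binding, &scope);
        printf("cvar %4d: %s: %s\n", i, name, desc);
    }

    MPI_T_finalize();
    return 0;
}

Because MPI_T has its own initialization, this can run even before MPI_INIT is called; compile it with mpicc and run it as a single process to see what knobs your MPI exposes.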

Read More »


Can I MPI_SEND (and MPI_RECV) with a count larger than 2 billion?

May 17, 2014 at 5:13 am PST

This question is inspired by the fact that the “count” parameter to MPI_SEND and MPI_RECV (and friends) is an “int” in C, which is typically a signed 4-byte integer, meaning that its largest positive value is 2^31 - 1, or about 2 billion.

However, this is the wrong question.

The right question is: can MPI send and receive messages with more than 2 billion elements?
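One common workaround (a sketch under my own assumptions, not necessarily the answer given in the full post) is to wrap the buffer in a derived datatype, so that the count passed to MPI_SEND stays small even though the total number of underlying elements exceeds 2^31:

#include <mpi.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    /* Hypothetical sizes: 3,000 chunks of 1,000,000 chars = 3 billion chars
       total, which is more than 2^31.  (Needs ~3 GB per process; shrink the
       numbers to experiment on a small machine.) */
    const int chunk = 1000000;
    const int nchunks = 3000;
    char *buf = malloc((size_t)chunk * (size_t)nchunks);

    /* One element of "bigtype" stands for 1,000,000 chars... */
    MPI_Datatype bigtype;
    MPI_Type_contiguous(chunk, MPI_CHAR, &bigtype);
    MPI_Type_commit(&bigtype);

    if (rank == 0) {
        /* ...so a count of only 3,000 moves 3 billion chars. */
        MPI_Send(buf, nchunks, bigtype, 1, 0, MPI_COMM_WORLD);
    } else if (rank == 1) {
        MPI_Recv(buf, nchunks, bigtype, 0, 0, MPI_COMM_WORLD,
                 MPI_STATUS_IGNORE);
    }

    MPI_Type_free(&bigtype);
    free(buf);
    MPI_Finalize();
    return 0;
}

The chunk size of 1,000,000 is arbitrary; any factorization that keeps the top-level count under 2^31 works the same way.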

Read More »


MPI over 40Gb Ethernet

May 10, 2014 at 3:29 am PST

Half-round-trip ping-pong latency may be the first metric that everyone looks at with MPI in HPC, but bandwidth is usually the next one examined.

40Gbps Ethernet has been available for switch-to-switch links for quite a while, and 40Gbps NICs are starting to make their way down to the host.

How does MPI perform with a 40Gbps NIC?
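For readers who want to try this on their own hardware, here is a minimal ping-pong bandwidth sketch of the kind implied here (not the benchmark actually used in the post); the message size and iteration count are arbitrary choices, and a real benchmark would add warm-up iterations:

#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    const int msg_size = 4 * 1024 * 1024;   /* 4 MB messages (arbitrary) */
    const int iters = 100;
    char *buf = malloc(msg_size);
    memset(buf, 0, msg_size);

    /* Requires at least 2 ranks; ranks beyond 1 just idle. */
    MPI_Barrier(MPI_COMM_WORLD);
    double start = MPI_Wtime();

    for (int i = 0; i < iters; ++i) {
        if (rank == 0) {
            MPI_Send(buf, msg_size, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
            MPI_Recv(buf, msg_size, MPI_CHAR, 1, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
        } else if (rank == 1) {
            MPI_Recv(buf, msg_size, MPI_CHAR, 0, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
            MPI_Send(buf, msg_size, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
        }
    }

    double elapsed = MPI_Wtime() - start;
    if (rank == 0) {
        /* 2 * iters messages crossed the link during "elapsed" seconds. */
        double gbps = (2.0 * iters * msg_size * 8.0) / elapsed / 1e9;
        printf("%d-byte messages: ~%.2f Gbps\n", msg_size, gbps);
    }

    free(buf);
    MPI_Finalize();
    return 0;
}

Run it across two hosts (with Open MPI, something like mpirun -np 2 --host node1,node2 ./bw) so that the traffic actually crosses the NIC rather than shared memory.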

Read More »
