Cisco Blog: High Performance Computing Networking

Still more traffic

August 2, 2014 at 5:00 am PST

I periodically write about network traffic, and about how general / datacenter network traffic analysis relates to MPI / HPC.

In my last entry, I mentioned how network traffic has many characteristics in common with distributed computing. Routing decisions, for example, are made independently at each network switch.

Imagine looking down at a city from above. Look at all the cars driving around the city streets. It’s chaos: each car/truck/bus/etc. makes its own routing decisions. Each one is a different size. Each one potentially goes in a different direction. Each one continually merges into and splits from other traffic.

Yet somehow it all works.

Traffic (redux)

July 28, 2014 at 10:44 am PST

I’ve written about network traffic before (see this post and this post). It’s the subject of endless blog posts, help forums, and instructional guides across the internet.

In a High Performance Computing (HPC) context, there are some fascinating aspects of network traffic that are fairly different from other types of network traffic.

BigMPI: You can haz moar counts!

June 13, 2014 at 9:01 am PST

[Image: Grumpy Cat hates small MPI counts]

Jeff Hammond has recently started developing the BigMPI library.

BigMPI is intended to handle all the drudgery of sending and receiving large messages in MPI.

In Jeff’s own words:

[BigMPI is an] Interface to MPI for large messages, i.e. those where the count argument exceeds INT_MAX but is still less than SIZE_MAX. BigMPI is designed for the common case where one has a 64b address space and is unable to do MPI communication on more than 2³¹ elements despite having sufficient memory to allocate such buffers.
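
To make the pain concrete, here is a rough sketch (in plain MPI-3, with a hypothetical send_big() helper — this is not BigMPI’s actual API) of the chunking gymnastics you have to do today when a count exceeds INT_MAX, and which BigMPI aims to hide:

```c
/* Minimal sketch (NOT BigMPI's API): sending count > INT_MAX doubles
 * with plain MPI by packing INT_MAX-element chunks into a derived
 * datatype.  This is the drudgery BigMPI is meant to hide. */
#include <mpi.h>
#include <limits.h>
#include <stddef.h>

static void send_big(const double *buf, size_t count,
                     int dest, int tag, MPI_Comm comm)
{
    size_t nchunks   = count / INT_MAX;  /* full INT_MAX-sized chunks */
    size_t remainder = count % INT_MAX;  /* leftover elements         */

    if (nchunks > 0) {
        MPI_Datatype chunk;
        MPI_Type_contiguous(INT_MAX, MPI_DOUBLE, &chunk);
        MPI_Type_commit(&chunk);
        /* One message carrying nchunks * INT_MAX doubles. */
        MPI_Send(buf, (int) nchunks, chunk, dest, tag, comm);
        MPI_Type_free(&chunk);
    }
    if (remainder > 0) {
        /* Second message for the tail; the receiver must mirror
         * this same two-message protocol. */
        MPI_Send(buf + nchunks * (size_t) INT_MAX, (int) remainder,
                 MPI_DOUBLE, dest, tag, comm);
    }
}
```

The receiver has to mirror the same two-message protocol, and every collective needs similar treatment — multiply that by every MPI function you use and you can see why a library that handles the drudgery is welcome.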

Networks for MPI

May 24, 2014 at 7:14 am PST

It seems like we’ve gotten a rash of “how do I set up my new cluster for MPI?” questions on the Open MPI mailing list recently.

I take this as a Very Good Thing, actually — it means more and more people are tinkering with and discovering the power of parallel computing, HPC, and MPI.

First public tools for the MPI_T interface in MPI-3.0

May 20, 2014 at 5:00 am PST

Today’s guest post is written by Tanzima Islam, Postdoctoral Researcher at Lawrence Livermore National Laboratory, and Kathryn Mohror and Martin Schulz, Computer Scientists at Lawrence Livermore National Laboratory.

The latest version of the MPI Standard, MPI-3.0, includes a new interface for tools: the MPI Tools Information Interface, or “MPI_T”.

MPI_T complements the existing MPI profiling interface, PMPI, and offers access both to internal performance information and to runtime settings. It is based on the concept of typed variables that can be queried, read, and set through the MPI_T API.
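
To give a flavor of the interface, here is a minimal sketch using the standard MPI-3.0 MPI_T calls to enumerate an implementation’s control variables; exactly which variables appear (and what they are called) depends entirely on the MPI library underneath:

```c
/* Minimal sketch: enumerate the MPI_T control variables ("cvars")
 * exposed by an MPI implementation, using the standard MPI-3.0 calls.
 * The output varies by MPI library. */
#include <mpi.h>
#include <stdio.h>

int main(void)
{
    int provided, num_cvars;

    /* MPI_T has its own init/finalize, independent of MPI_Init. */
    MPI_T_init_thread(MPI_THREAD_SINGLE, &provided);

    MPI_T_cvar_get_num(&num_cvars);
    printf("This MPI implementation exposes %d control variables\n",
           num_cvars);

    for (int i = 0; i < num_cvars; i++) {
        char name[256], desc[1024];
        int name_len = sizeof(name), desc_len = sizeof(desc);
        int verbosity, bind, scope;
        MPI_Datatype datatype;
        MPI_T_enum enumtype;

        /* Query the typed-variable metadata for cvar i. */
        MPI_T_cvar_get_info(i, name, &name_len, &verbosity, &datatype,
                            &enumtype, desc, &desc_len, &bind, &scope);
        printf("  cvar %3d: %s\n", i, name);
    }

    MPI_T_finalize();
    return 0;
}
```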
