Google Summer of Code Project: Hardware Locality and TCL

April 25, 2011 at 6:20 pm PST

Google today announced its Summer of Code 2011 project winners.  One of the winners was a project proposed by George Andreou, based on this idea on the TCL wiki: create some kind of “native” hwloc binding for TCL.

Congratulations, George!  A (brief) abstract of George’s winning project can be found here.

There are more details involved than what is included in that abstract, of course, but I’m excited to see hwloc continue to spread and become genuinely useful to an ever-growing community.

Can we count on MPI to handle large datasets?

April 22, 2011 at 2:25 pm PST

(today’s entry is guest-written by Fab Tillier, Microsoft MPI engineer extraordinaire)

When you send data in MPI, you specify how many items of a particular datatype you want to send in your call to an MPI send routine.  Likewise, when you read data from a file, you specify how many datatype elements to read.

This “how many” value is referred to in MPI as a count parameter, and all of MPI’s functions define count parameters as integers: int in C, INTEGER in Fortran.  This definition often limits users to 2^31 elements (i.e., roughly two billion elements) because int and INTEGER default to 32 bits on many of today’s platforms.

That may sound pretty big, but consider that a 2^31 byte file is not really that large by today’s standards — especially in HPC, where datasets can sometimes be terabytes in size.  Reading a ~2 gigabyte file can take (far) less than a second.
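
For illustration, here is a minimal sketch (mine, not from Fab’s post) of the most common workaround today: wrap the data in a larger derived datatype so that the count argument itself stays within the range of an int.  The chunk sizes below are arbitrary, and the trick gets awkward quickly for non-contiguous data, which is part of the motivation for better large-count support.

#include <mpi.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    /* 4000 chunks of 1,000,000 chars each = 4 billion elements total,
       which would overflow a 32-bit count if sent as raw MPI_CHARs. */
    const int chunk_len  = 1000000;
    const int num_chunks = 4000;
    char *buf = malloc((size_t) chunk_len * num_chunks);   /* ~4 GB; assumes enough RAM */

    /* Describe one chunk as a single derived datatype... */
    MPI_Datatype chunk_type;
    MPI_Type_contiguous(chunk_len, MPI_CHAR, &chunk_type);
    MPI_Type_commit(&chunk_type);

    /* ...so the count passed to MPI is only 4000, well within an int.
       Run with at least 2 ranks (e.g., mpirun -np 2). */
    if (rank == 0) {
        MPI_Send(buf, num_chunks, chunk_type, 1, 0, MPI_COMM_WORLD);
    } else if (rank == 1) {
        MPI_Recv(buf, num_chunks, chunk_type, 0, 0, MPI_COMM_WORLD,
                 MPI_STATUS_IGNORE);
    }

    MPI_Type_free(&chunk_type);
    free(buf);
    MPI_Finalize();
    return 0;
}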

What is MPI_MPROBE?

April 15, 2011 at 5:17 pm PST

Here are some MPI quick bites for this week:

  • The MPI_MPROBE proposal was voted into MPI-3 a few weeks ago.  Yay! (see this quick slideshow for an explanation of what MPI_MPROBE is, and the small usage sketch after this list)
  • The Hardware Locality project just released hwloc v1.2.  This new version now includes distance metrics between objects in the topology tree.  W00t!
  • Support for large counts looks good for getting passed into MPI-3; it’s up for its first formal reading at the upcoming Forum meeting.
  • The same is true for the new MPI-3 one-sided stuff; it, too, is up for its first formal reading at the upcoming Forum meeting (they haven’t sent around their new PDF yet, but they will within a week or so — stay tuned here for updates).
  • Likewise, the new Fortran-08 bindings are up for their first Forum presentation next meeting.  We solved all of the outstanding Fortran issues with the F77 and F90 bindings… with the possible exception of non-blocking communication code movement.  :-(  That one is still being debated with the Fortran language standardization body — it’s a complicated issue!
  • Finally — the new MPI tools interface chapter is up for a first formal reading, too.
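
For the curious, here is a rough sketch of how the proposed matched-probe interface is intended to be used: a thread probes for a message of unknown size, gets back a handle to that specific message, and then receives exactly that message, with no chance of another thread matching it in between.  Function names follow the proposal and could still change before MPI-3 is finalized.

#include <mpi.h>
#include <stdlib.h>

void recv_unknown_size(int source, int tag, MPI_Comm comm)
{
    MPI_Message msg;
    MPI_Status  status;

    /* Match (but do not yet receive) the next message from source/tag.
       The returned MPI_Message handle refers to that specific message. */
    MPI_Mprobe(source, tag, comm, &msg, &status);

    /* Size the buffer from the matched message, then receive exactly
       that message; no other thread can match it in the meantime. */
    int count;
    MPI_Get_count(&status, MPI_INT, &count);
    int *buf = malloc(count * sizeof(int));
    MPI_Mrecv(buf, count, MPI_INT, &msg, &status);

    /* ... use buf ... */
    free(buf);
}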

WHEW!

That’s a lot of first formal readings in one meeting…

Euro MPI 2011 Call for Papers

April 5, 2011 at 8:53 am PST

Euro MPI 2011 has just issued its call for papers (please redistribute!):

Santorini, Greece, September 18-21, 2011
BACKGROUND AND TOPICS

EuroMPI is the primary meeting where the users and developers of MPI and other message-passing programming environments can interact. The 18th European MPI Users’ Group Meeting will be a forum for the users and developers of MPI, but will also welcome hybrid programming models that combine message passing with programming of modern architectures such as multi-core processors or accelerators. Through the presentation of contributed papers, poster presentations, and invited talks, attendees will have the opportunity to share ideas and experiences that contribute to the improvement and furthering of message-passing and related parallel programming paradigms.

Topics of interest for the meeting include, but are not limited to:

  • Algorithms using the message-passing paradigm
  • Applications in science and engineering based on message-passing
  • User experiences in programming heterogeneous systems using MPI
  • Tools and environments for programming heterogeneous systems using MPI
  • MPI implementation issues and improvements
  • Latest extensions to MPI
  • MPI for high-performance computing, clusters and grid environments
  • New message-passing and hybrid parallel programming paradigms
  • Interaction between message-passing software and hardware
  • Fault tolerance in message-passing programs
  • Performance evaluation of MPI applications
  • Tools and environments for MPI

See the full web site for more information.

Special RCE podcast: Fukushima reactor

March 26, 2011 at 8:00 am PST

Given the seriousness of issues surrounding the Fukushima, Japan reactors, Brock and I decided to reach out through our HPC contacts to find some experts to discuss the situation.  We found Drs. Kim Kearfott and Mike Hartman at the University of Michigan (Dr. Hartman is one of Brock’s HPC users at UM); both are on the faculty of the nuclear engineering department there.

Our conversation with the good Doctors provided a wealth of accurate, easy-to-understand information about what is — and what is not — concerning about Fukushima.

Most people forget that the “E” in “RCE” stands for engineering, so while this podcast topic is a bit outside our normal fare, it is actually within the original charter of the series.
