
MPI User Survey: Fun Results

March 19, 2010 at 12:00 pm PST

Here are some fun results that we gleaned from the MPI user community survey…

Respondents were asked how much they valued each of the following in MPI on a scale from 1 (most important) to 5 (least important); each item could be rated individually:

  • Runtime performance (e.g., latency, bandwidth, resource consumption, etc.)
  • Feature-rich API
  • Run-time reliability
  • Scalability to large numbers of MPI processes
  • Integration with other middleware, communication protocols, etc.

Asking about the first item in the list, runtime performance, may seem silly.  After all, this is high performance computing.  Many on the Forum assumed that everyone would rank runtime performance as the most important item.  They were wrong (!).

  • Only about half of the respondents rated runtime performance as 1 (“most important”)
  • Another quarter rated runtime performance as 2 (“somewhat important”)
  • The remaining quarter either didn’t answer the question or rated runtime performance as 3 or higher.

Huh.  Fascinating.  I didn’t expect everyone to rate runtime performance as 1, but I guess I expected (much) more than half.  You can speculate on all kinds of meanings here; the data doesn’t specify why people picked the runtime performance ratings they did.  My personal guess is that this reflects MPI starting to be used for general inter-process communication; it may be percolating out of pure-HPC scenarios.

Here’s another fun result: respondents were asked to rank the following six proposed MPI-3 topics in order from most important to least important:

  • Nonblocking collectives
  • Revamped one-sided communications (compared to MPI-2.2)
  • MPI application control of fault tolerance
  • New Fortran bindings
  • “Hybrid” programming (MPI in conjunction with threads, OpenMP, etc.)
  • Standardized third-party MPI tool support

What’s really interesting is correlating the results of these two questions.  We noticed two distinct patterns:

  1. Those who value nonblocking collectives highly also tend to value runtime performance.
  2. Those who value nonblocking collectives highly also tend not to value a feature-rich MPI API.

Taken together, our interpretation of these two points is that those who value nonblocking collectives see them as a performance enhancement, not as yet another feature.  Or, put differently:

  • Users will assume that nonblocking communication implementations will perform well
  • Users will assume that nonblocking communication implementations will provide communication/computation overlap (see the sketch after this list)
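To make that expectation concrete, here is a minimal sketch of the kind of nonblocking collective usage being proposed.  It assumes an immediate-mode counterpart to MPI_Allreduce (shown here as MPI_Iallreduce, in the style of the existing nonblocking point-to-point calls); treat the exact name and signature as illustrative of the proposal, not of any shipping implementation:

    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        double local = 1.0, global = 0.0;
        MPI_Request req;

        MPI_Init(&argc, &argv);

        /* Start the reduction; the call returns immediately instead of
           blocking until the collective completes. */
        MPI_Iallreduce(&local, &global, 1, MPI_DOUBLE, MPI_SUM,
                       MPI_COMM_WORLD, &req);

        /* Do computation that does not depend on "global" here; the
           expectation is that the MPI implementation progresses the
           collective in the background, overlapping it with this work. */

        /* Complete the collective before reading the result. */
        MPI_Wait(&req, MPI_STATUS_IGNORE);

        printf("global sum = %f\n", global);

        MPI_Finalize();
        return 0;
    }

The interesting part is the window between starting the collective and waiting on it: users expect real progress on the reduction during that window, which is exactly the overlap-as-performance expectation described above.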

As an MPI implementer, I find this good information to have.  :-)
