
EuroMPI 2015 Call for Participation


June 29, 2015

EuroMPI 2015 is presented in cooperation with ACM and SIGHPC in Bordeaux, France, September 21st–23rd, 2015.

EuroMPI is the prime annual meeting for researchers, developers, and students in message-passing parallel computing with MPI and related paradigms.

The early registration deadline is September 1st, 2015.

The conference will feature 14 strong technical paper presentations, 3 invited talks, 3 tutorials, and posters. Detailed conference information is being updated incrementally on the main conference web site.

TUTORIALS

  • Performance analysis for High Performance Systems, François Trahay, Telecom SudParis
  • Understanding and managing hardware affinities with Hardware Locality (hwloc), Brice Goglin, Inria Bordeaux Sud-Ouest
  • Insightful Automatic Performance Modeling, Alexandru Calotoiu, TU Darmstadt

INVITED TALKS

  • A new high performance network for HPC systems: Bull eXascale Interconnect (BXI), Jean-Pierre Panziera, Chief Technology Director for Extreme Computing at Atos
  • Computational Fluid Dynamics and High Performance Computing, Gabriel Staffelbach, Senior Researcher at CERFACS
  • Is your software ready for exascale? – How the next generation of performance tools can give you the answer, Prof. Felix Wolf, Department of Computer Science of TU Darmstadt

IMPORTANT DATES

  • Early Registration Deadline: September 1st, 2015
  • Tutorials: September 21st, 2015
  • Conference: September 22nd-23rd, 2015


In an effort to keep conversations fresh, Cisco Blogs closes comments after 60 days. Please visit the Cisco Blogs hub page for the latest content.

6 Comments

  1. Thanks for your comments, John! :-)

  2. Hi Anonymous. No, there is no current proposal in front of the MPI Forum for a lambda-calculus-based type of model. Do you have some ideas that could be turned into a proposal to be presented?

    • It would be wonderful (and useful) if a "lambda-calculus-based" type model could be used for applications requiring sparse matrices in parallel.

  3. Hi Jeff, will there be an adoption of the functional (lambda-calculus) programming model for MPI 3.1?

  4. Yes, that sounds like a great idea -- I'll write about this soon.

  5. Hi Jeff, could you write a blog post about how the MPI standard specifies constants, or, e.g., the actual type of "magic" handles like MPI_DATATYPE? I'm particularly interested in what the standard guarantees, and what problems this introduces for, e.g., those trying to write wrappers around the MPI C interface for other languages like Rust, Python, Julia, ... For example, one typically needs a wrapper for each implementation, which complicates things in typed languages like Rust or Haskell. Does the committee see this as a problem? Is there a plan to make it easier to use current MPI implementations from other languages (e.g., having a single ABI, at least per platform)? I would love to hear your thoughts about this. Best regards
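
For context on the ABI issue raised in the last comment: the MPI standard defines MPI_Datatype only as an opaque handle, so implementations are free to choose different representations (MPICH uses a plain int with compile-time constants, while Open MPI uses pointers to internal structures). One common workaround for foreign-language bindings is a small C shim, compiled against the local mpi.h, that exposes the handles through functions rather than hard-coded values. The sketch below is purely illustrative; the shim_* names are hypothetical and not part of any MPI implementation.

```c
/* shim.c -- hypothetical helper, compiled against the local mpi.h.
 * Because MPI_Datatype is an opaque handle whose size and values differ
 * between implementations, a binding for Rust, Python, Julia, etc. can
 * link against a shim like this and query the handles at run time
 * instead of hard-coding implementation-specific values. */
#include <mpi.h>
#include <string.h>

/* Size of the opaque handle on this implementation/platform. */
size_t shim_sizeof_datatype(void) { return sizeof(MPI_Datatype); }

/* Copy a predefined handle into caller-provided storage of
 * shim_sizeof_datatype() bytes; the caller treats it as an opaque blob
 * and only ever passes it back to MPI routines. */
void shim_get_mpi_int(void *out)    { MPI_Datatype d = MPI_INT;    memcpy(out, &d, sizeof d); }
void shim_get_mpi_double(void *out) { MPI_Datatype d = MPI_DOUBLE; memcpy(out, &d, sizeof d); }
```

A binding built this way needs to be recompiled against each MPI implementation, but its source no longer has to know whether the handle is an int or a pointer, which is exactly the portability gap a single per-platform ABI would close.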