

High Performance Computing Networking

Today’s the day.

Today marks 10 years since the first commit in the original Open MPI CVS source code repository (which was later converted to Subversion):

$ svn log -r 1 http://svn.open-mpi.org/svn/ompi
------------------------------------------------------------
r1 | jsquyres | 2003-11-22 11:36:58 -0500 (Sat, 22 Nov 2003)
First commit
------------------------------------------------------------

As I look right now, I see that we’re up to r29729.

Wow!

The Ohloh.net site gives detailed statistics from the Open MPI Subversion history.

Before I made that first commit ten years ago, the LAM/MPI, LA-MPI, and FT-MPI developer teams had spent many, many hours in technical and logistical discussions.  Working together seemed like a good idea, but none of us really knew whether the whole thing would work as a combined project (PACX-MPI joined us shortly after we started).

The idea was fairly radical: merge four existing MPI implementations, taking the best ideas from each to start both a whole new code base and a new community of MPI implementors.  This meant taking software developers with different backgrounds, different organizational biases, and different goals, and getting them all to work toward a common goal.

As history has shown, not only did this idea work, it worked really, amazingly, astoundingly well.

Open MPI both as a software project and as a community is alive, kicking, and more awesome than ever.  It’s been a great ride; I’ve been privileged to work with oodles of waaaay-smarter-than-me MPI implementors, MPI application developers, and system administrators over the past 10 years.

I look forward to the next 10.

Long live Open MPI!


2 Comments.


  1. Hello sir,

    My name is Nazuan. I’m currently working on my final-year project for my Bachelor’s degree. The project involves setting up a cluster grid over IPv4 and IPv6 using MPI, and I found your blog while gathering information. So far, my project works perfectly over IPv4, but I run into a problem when I try to use IPv6.

    I’m using Ubuntu 12.04 Server as the cluster grid frontend and Ubuntu 12.04 Desktop as the resource client. IPv6 ping, the network file system, and passwordless SSH are all working fine. The code also runs fine on a single machine, but the problem appears when I add the resource client. I hope you can help me, sir; I would very much appreciate it.

    Thank you, sir.

    Sincerely,
    Mohd Nazuan


  2. If you’re having a problem with IPv6 and Open MPI, you should check out http://www.open-mpi.org/community/help/ and send a mail to the users mailing list describing your issue (e.g., I note that you haven’t said *what* your issue is, other than “it doesn’t work” — you’ll want to be a bit more specific when you send to the users list).

