
High Performance Computing Networking

After some further thought, I believe I was too quick to say that MPI is not a good fit for the embedded / real-time (RT) space.

Yes, MPI is “large” (hundreds of functions with lots of bells and whistles).  Yes, mainstream MPI is not primarily targeted towards RT environments.

But this does not mean that there have not been successful forays of MPI into this space.  Two obvious ones jump to mind:

Of course, it’s easy for me to say “MPI is the solution!” since that’s my obvious bias.  :-)

It would be interesting to see a well-balanced, objective compare/contrast of technologies like MPI, MCAPI, and other embedded communication APIs that can bridge traditional “macro” processors (Intel, AMD, etc.) with embedded processors.


2 Comments.


  1. MPI 1.0 appeared in 1994, when the fastest supercomputer in the world (the Intel Paragon) ran at less than 100 MHz and had 16 MB of memory per node. Since MPI was designed to run on systems with far less processing power than even the most mediocre embedded processor of today, the argument that MPI is not well-suited for modern embedded systems is a non-starter.

    It is entirely possible that the embedded software community is too ignorant about MPI to use it properly, but this is hardly the fault of the MPI community :-)


    • Fair point. I don’t know enough about the current state of the embedded market (as is probably obvious from my recent ignorant posts :-) ), but what you say makes sense.

