Cisco Blog > High Performance Computing Networking

Is your MPI IPv6-ready?

Here’s a poll for readers: is your MPI IPv6-ready?

Many of you may not be using IP-based MPI network transports, but as HPC becomes more and more commoditized, IP-based MPI implementations may actually start gaining in importance.  Not for ultra-high-performance systems, of course.  But you'd be surprised how many 4-, 8-, and 16-node Ethernet-based clusters are sold these days, particularly as core counts increase: a 16-node Westmere cluster is quite powerful!

Owners of such systems are typically running ISV-based MPI applications, or other “canned” parallel software.  Most of them don’t use InfiniBand or other high-speed interconnect — they just use good old Ethernet with TCP as the underlying transport for their MPI.
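For a TCP-based MPI transport, being IPv6-ready is largely a matter of resolving and connecting without hard-coding IPv4 anywhere.  Here's a minimal sketch (not from any particular MPI implementation; the function name and arguments are made up for illustration) of the standard technique: let getaddrinfo() hand back whatever address families the resolver knows about, and try each in turn:

```c
#include <string.h>
#include <netdb.h>
#include <sys/socket.h>
#include <unistd.h>

/* Open a TCP connection without assuming IPv4 or IPv6.
 * AF_UNSPEC tells getaddrinfo() to return candidates from
 * every available address family; we try each until one
 * connects.  Returns a connected fd, or -1 on failure. */
int connect_any(const char *host, const char *port)
{
    struct addrinfo hints, *res, *ai;
    int fd = -1;

    memset(&hints, 0, sizeof(hints));
    hints.ai_family = AF_UNSPEC;     /* IPv4 or IPv6: don't care */
    hints.ai_socktype = SOCK_STREAM; /* TCP */

    if (getaddrinfo(host, port, &hints, &res) != 0)
        return -1;

    for (ai = res; ai != NULL; ai = ai->ai_next) {
        fd = socket(ai->ai_family, ai->ai_socktype, ai->ai_protocol);
        if (fd < 0)
            continue;
        if (connect(fd, ai->ai_addr, ai->ai_addrlen) == 0)
            break;              /* connected */
        close(fd);
        fd = -1;
    }
    freeaddrinfo(res);
    return fd;
}
```

Code that instead calls gethostbyname() or builds sockaddr_in structs by hand is exactly the kind of thing that will break the day central IT flips the switch to IPv6.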

Sure, you ultra-high-performers out there may scoff at such a setup, but it’s pretty darn common these days.

Indeed, this class of customers (who clearly aren't in the Top 500; let's call them the "Bottom 500,000") just want to plug-n-play.  They don't want to tweak, tune, or fiddle.  They just want to run their apps out-of-the-box and enjoy some level of speedup over running on a single machine (probably directly proportional to how much they paid for their cluster).

These are enterprise customers.

They use organization-wide resources for maintenance and support.  They call their IT department to set up the cluster for them.  And central IT doesn't like one-off solutions; they like centrally managed, supported, and as-homogeneous-as-possible solutions.

With all the press recently about running out of IPv4 address blocks, how long will it be before organizations start using IPv6 internally?  It may be soon; it may be years away.  But that day is coming. And when that happens, central IT may want HPC clusters to use IPv6, too.

Is your MPI ready?

(yes, I know that there’s oodles of other HPC-related software that will need to be IPv6-ready, too, but this blog is about MPI :-)  )

It would be interesting to hear if anyone is using MPI over IPv6 for real production runs (and why).
