Cisco Blogs

# Probability of correctness

- July 12, 2010

Pop quiz, hotshot: what happens if you run this program with 32 processes on your favorite parallel resource?  (copy-n-pasting this code to compile and run it yourself is CHEATING!)

```cpp
  int buf, rank = MPI::COMM_WORLD.Get_rank();
  if (0 == rank) {
    for (int i = 1; i < MPI::COMM_WORLD.Get_size(); ++i) {
      MPI_Status status;
      MPI_Recv(&buf, 1, MPI_INT, MPI_ANY_SOURCE, 123,
               MPI_COMM_WORLD, &status);
      buf = i * 2;
      MPI_Send(&buf, 1, MPI_INT, status.MPI_SOURCE, 123,
               MPI_COMM_WORLD);
    }
  } else {
    MPI_Send(&rank, 1, MPI_INT, 0, 123, MPI_COMM_WORLD);
    MPI_Recv(&buf, 1, MPI::INT, MPI_ANY_SOURCE, 123,
             MPI_COMM_WORLD, MPI_STATUS_IGNORE);
  }
```

The mix of C and C++ is for brevity here on the blog.  But yes: it does compile, and it is a valid MPI application.

If you said “that program’s non-deterministic!”, you’d be right.

Why?  Note that rank 0 receives from the wildcard MPI_ANY_SOURCE, but then computes the reply value from the loop variable i — not from the actual source rank.  Hence, the value each peer receives back depends on the order in which rank 0 receives from its peers.  And that order is non-deterministic.
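If determinism is what you actually want here, one hypothetical fix (mine, not from the original code — shown in pure C bindings for brevity) is to compute the reply from status.MPI_SOURCE instead of the loop index:

```c
#include <mpi.h>
#include <stdio.h>

int main(int argc, char *argv[]) {
    int buf, rank, size;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    if (0 == rank) {
        for (int i = 1; i < size; ++i) {
            MPI_Status status;
            MPI_Recv(&buf, 1, MPI_INT, MPI_ANY_SOURCE, 123,
                     MPI_COMM_WORLD, &status);
            /* Compute the reply from the actual sender, not the
               loop index -- every peer now always gets (its own
               rank * 2), regardless of arrival order. */
            buf = status.MPI_SOURCE * 2;
            MPI_Send(&buf, 1, MPI_INT, status.MPI_SOURCE, 123,
                     MPI_COMM_WORLD);
        }
    } else {
        MPI_Send(&rank, 1, MPI_INT, 0, 123, MPI_COMM_WORLD);
        /* Also receive specifically from rank 0 rather than a
           wildcard, since rank 0 is the only possible sender. */
        MPI_Recv(&buf, 1, MPI_INT, 0, 123,
                 MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        printf("Rank %d got %d\n", rank, buf);
    }
    MPI_Finalize();
    return 0;
}
```

The wildcard receive is still there — it's the calculation that changed.  Non-determinism in message *arrival* is harmless as long as the *result* doesn't depend on it.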

…but is that a Bad Thing?

In some cases, yes.  If the logic in your code assumes an ordering and the consecutive message receipt sources are not in the order that you expect, then clearly that’s a Bad Thing.  Thar be monsters thar.

But in some cases, it’s not a bad thing.

“Hey, wait!” you say.  “You MPI implementors have always told me that wildcards are evil!  You’ve told us application developers to pre-post non-blocking receives in the order that we want and use MPI_TEST (and friends) to selectively complete those receives, potentially imposing an artificial order.  What gives?”

Well, yes, pre-posting receives can be a (very) Good Thing.  For example, pre-posting receives can allow the MPI implementation to maximize communication/computation overlap without causing additional copies for unexpected messages.
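As a sketch of what that looks like (a hypothetical fragment, not from the original post; assumes an already-initialized MPI job where `size` is the communicator size and `do_useful_computation()` is a stand-in for your application's work):

```c
/* Pre-post one non-blocking receive per peer *before* doing any
   work, so the MPI implementation can place arriving messages
   directly into recv_bufs -- no unexpected-message copies. */
MPI_Request reqs[size - 1];
int recv_bufs[size - 1];
for (int i = 1; i < size; ++i)
    MPI_Irecv(&recv_bufs[i - 1], 1, MPI_INT, i, 123,
              MPI_COMM_WORLD, &reqs[i - 1]);

do_useful_computation();  /* hypothetical: overlap happens here */

/* Complete the receives; MPI_Test / MPI_Waitany could instead be
   used to selectively complete them in an order of our choosing. */
MPI_Waitall(size - 1, reqs, MPI_STATUSES_IGNORE);
```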

But sometimes a little non-deterministic chaos is just what you need. For example, it may be wasteful to pre-post specific receives from thousands of MPI peer processes when just a few pre-posted receives from MPI_ANY_SOURCE would consume far fewer resources.  And sometimes your application doesn’t care about ordering — maybe you have a manager/worker kind of workflow where it doesn’t matter which specific worker does the work.  Or maybe your application can use the MPI_SOURCE value from the status on a wildcard receive to figure out what to do based on who sent it.
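For instance, the manager side of such a workflow might look like this (entirely hypothetical — TAG_RESULT, tasks_remain(), and friends are illustrative names, not from the original post):

```c
/* One wildcard receive services whichever worker finishes first;
   status.MPI_SOURCE tells us who that was, so the next unit of
   work goes straight back to an idle worker. */
while (tasks_remain()) {                       /* hypothetical */
    int result, task = next_task();            /* hypothetical */
    MPI_Status status;
    MPI_Recv(&result, 1, MPI_INT, MPI_ANY_SOURCE, TAG_RESULT,
             MPI_COMM_WORLD, &status);
    record_result(status.MPI_SOURCE, result);  /* hypothetical */
    MPI_Send(&task, 1, MPI_INT, status.MPI_SOURCE, TAG_WORK,
             MPI_COMM_WORLD);
}
```

Which worker handles which task is non-deterministic — and for this workload, that's precisely the point.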

And so on.

Granted, even a measured amount of non-determinism can make applications more difficult to debug.  And you probably want to get the same answer every time you run your application — without a high probability of correctness, this is all moot, right?  But don’t be quick to judge; sometimes some non-determinism can be just what the doctor ordered.