Why MPI “wrapper” compilers are Good for you
An interesting thread on the Open MPI users' mailing list came up the other day: a user wanted Open MPI's "mpicc" wrapper compiler to accept the same command line options as MPICH's "mpicc" wrapper. On the surface, this is a very reasonable request. After all, MPI is all about portability — so why not make the wrapper compilers be the same?
Unfortunately, this request exposes a can of worms and at least one unfortunate truth: the MPI API is portable, but other aspects of MPI are not, such as compiling/linking MPI applications, launching MPI jobs, etc.
Let’s first explore what wrapper compilers are, and why they’re good for you.
A little background: compiling and linking an MPI application requires adding at least one flag to the compiler/linker command line. Sometimes it requires adding a lot of flags.
"Wrapper compilers" for MPI implementations were created in the mid-90's to solve this problem. A wrapper compiler transparently adds all the MPI-specific compiler/linker flags for you. Users compile/link their applications as if no additional flags were required. For example:
mpicc my_mpi_application.c -o my_mpi_application -O3
The “mpicc” wrapper compiler will add in the necessary compiler and linker flags and then pass your augmented command line down to the “real” compiler (e.g., gcc, icc, or whatever C compiler you’re using). On my OS X laptop, the above command line translates into:
gcc my_mpi_application.c -o my_mpi_application -O3 -I/Users/jsquyres/bogus/include -L/Users/jsquyres/bogus/lib -lmpi
Not too bad — just specifying a directory for the header files and library files, and linking to the MPI library.
On my Linux machine, however, it’s a little more involved:
gcc my_mpi_application.c -o my_mpi_application -O3 -I/home/jsquyres/bogus/include -pthread -L/home/jsquyres/bogus/lib -lmpi -lrdmacm -libverbs -lsctp -lrt -lnsl -lutil -lm -lnuma -ldl -Wl,--export-dynamic -lrt -lnsl -lutil -lm -ldl
To be fair, on this Linux install, I chose to build Open MPI a different way than I did on my laptop: I used static libraries instead of shared libraries. This resulted in a significantly different set of flags in the wrapper compiler.
Did you catch that subtle point? The flags required to compile and link MPI applications can change depending on how you configure / build / install your MPI implementation.
It would be terrible to force users to figure out these flags by themselves — especially if they have no idea how the IT administrators configured / built / installed their MPI implementation. Hence, wrapper compilers save you from all of that. No matter how your Open MPI was installed, "mpicc myapp.c -o myapp" will always work.
That’s kinda the point, right?
Of course, there are some legitimate, real-world use cases where you can't use MPI's wrapper compilers. For this reason, Open MPI's wrapper compilers let you extract the flags by using the "--showme" option. It has several variants, too, like "--showme:cppflags" and "--showme:ldflags" and "--showme:libs", and so on, if you need to extract just certain types of flags.
These flags can be used in large Makefiles, for example, to pull out the flags MPI needs and then combine them with the flags required for other middleware.
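As a rough sketch of that pattern (assuming Open MPI's "mpicc" is on your PATH — the exact flags printed will differ from install to install, and "OTHER_CPPFLAGS" / "OTHER_LIBS" are hypothetical placeholders for your other middleware's flags):

```shell
# Hypothetical sketch: extract each class of flags from the wrapper, then
# combine them with flags from other middleware on your own compile line.
# The 2>/dev/null fallbacks keep this runnable even on a machine without
# Open MPI installed -- the variables are simply left empty.
MPI_CPPFLAGS=$(mpicc --showme:cppflags 2>/dev/null || true)
MPI_LDFLAGS=$(mpicc --showme:ldflags 2>/dev/null || true)
MPI_LIBS=$(mpicc --showme:libs 2>/dev/null || true)

# OTHER_CPPFLAGS / OTHER_LIBS stand in for some other library's flags.
# (echo'd here rather than executed, so the sketch runs anywhere)
echo gcc $OTHER_CPPFLAGS $MPI_CPPFLAGS my_mpi_application.c \
    -o my_mpi_application $MPI_LDFLAGS $MPI_LIBS $OTHER_LIBS
```

In a Makefile, the same idea is typically wrapped in $(shell ...) so the flags are captured once, at the top of the file.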
Other MPI implementations usually provide similar functionality; check their documentation for details.
In my next blog post, I'll talk about some of the portability issues that wrapper compilers create.