Ethernet and Gigabit Ethernet
10 Gigabit Ethernet (10-GbE or 10-GigE) can be considered the default for our MPICH stacks (both the MPI-1 and MPI-2 standards). Some PACE clusters still use 1-GigE, or even regular Ethernet, however. Please refer to the cluster definitions to find out which interconnection network a given cluster currently uses. Here are three PBS script examples for using MPI-1 on PACE clusters:
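A minimal sketch of such a script follows. The job name, queue, resource requests, program name, and the MPICH install path are all placeholders; consult your cluster's definition and the PACE-provided samples for the actual values.

```shell
#!/bin/bash
#PBS -N mpi1_test            # job name (placeholder)
#PBS -l nodes=2:ppn=4        # 2 nodes, 4 processors per node (adjust as needed)
#PBS -l walltime=01:00:00    # one hour of wall time
#PBS -q your_queue           # placeholder; use your cluster's queue name

cd $PBS_O_WORKDIR

# MPICH (MPI-1) launch: mpirun reads the node list that PBS provides.
# The install path is an assumption; adjust to your cluster's layout.
/usr/local/mpich/bin/mpirun -machinefile $PBS_NODEFILE \
    -np $(wc -l < $PBS_NODEFILE) ./my_mpi_program
```

The same skeleton works for the gnu, intel, and pgi builds; only the path to the MPICH stack changes.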
The MPI-2 standard adds more flexibility, performance, and features compared to MPI-1. Although it is backwards compatible with MPI-1, we have seen some MPI-1 codes that create problems when compiled with MPICH2. PACE maintains three MPICH2 stacks, built with the gnu, intel, and pgi compilers. To use MPICH2, you need to set up 'mpdboot' first (only once). Here's how:
* Make sure you have a $HOME/.mpd.conf file with the following contents:
echo "MPD_SECRETWORD=what_ever_word_you_want" > $HOME/.mpd.conf
Do not use your GT password for this word! Also, make sure your .mpd.conf file is readable only by you:
chmod 600 $HOME/.mpd.conf
Then, you can use one of the following PBS samples to launch it:
Note the use of mpdboot inside these scripts; it significantly reduces launch latency compared to MPI-1.
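A minimal sketch of such an MPICH2 script, assuming the /usr/local/mpich2 stack mentioned below (job name, resources, queue, and program name are placeholders):

```shell
#!/bin/bash
#PBS -N mpich2_test          # job name (placeholder)
#PBS -l nodes=2:ppn=4        # adjust to your needs
#PBS -l walltime=01:00:00
#PBS -q your_queue           # placeholder; use your cluster's queue name

cd $PBS_O_WORKDIR

NP=$(wc -l < $PBS_NODEFILE)           # total MPI processes
NNODES=$(sort -u $PBS_NODEFILE | wc -l)  # distinct nodes in the allocation

# Start the MPD ring on the allocated nodes; this reads $HOME/.mpd.conf,
# which is why the one-time setup above is required.
/usr/local/mpich2/bin/mpdboot -n $NNODES -f $PBS_NODEFILE

/usr/local/mpich2/bin/mpirun -np $NP ./my_mpi_program

# Tear the ring down when the job is done
/usr/local/mpich2/bin/mpdallexit
```

Substitute the gnu, intel, or pgi stack path as appropriate.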
Some of the PACE clusters partially or fully utilize a QDR InfiniBand (IB) fabric for faster communication. To use InfiniBand, codes need to be compiled and launched with one of the MVAPICH stacks we provide. Note that the MPICH and MPICH2 stacks do not use InfiniBand. PACE currently supports several MVAPICH versions for the gnu, intel, and pgi compilers. Note that MVAPICH requires several changes in the PBS script file to enable 'rsh', so the example script given on the Job Submission page cannot be used as-is.
Here are three sample PBS scripts that demonstrate the use of MVAPICH for the gnu, intel, and pgi compilers:
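A rough sketch of what such a script looks like, assuming the /usr/local/mvapich stack mentioned below; the job name, resources, and exact launcher flags are placeholders, so check the PACE-provided samples for the authoritative settings:

```shell
#!/bin/bash
#PBS -N mvapich_test         # job name (placeholder)
#PBS -l nodes=2:ppn=4        # adjust to your needs
#PBS -l walltime=01:00:00
#PBS -q paceib               # IB-enabled queue on the Community Cluster

cd $PBS_O_WORKDIR

# mpirun_rsh starts remote processes over rsh rather than the PBS task
# manager, which is why the stock Job Submission example script cannot
# be used unchanged.
/usr/local/mvapich/bin/mpirun_rsh -rsh -np $(wc -l < $PBS_NODEFILE) \
    -hostfile $PBS_NODEFILE ./my_mpi_program
```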
InfiniBand or GigE on the PACE Community Cluster
When using the PACE Community Cluster, be careful not to mix InfiniBand and GigE elements in your edit-compile-run cycle.
If you want to run your code over InfiniBand, you'll need to compile it with the appropriate compiler wrapper script and run it with the appropriate mpirun (via your PBS script and the paceib queue). The InfiniBand tools can be found here:
/usr/local/mvapich/bin/mpicc (or mpic++, mpiCC, mpicxx, mpif77, mpif90)
If you want to run your code over GigE, you'll need to compile it with the appropriate compiler wrapper script and run it with the appropriate mpirun (via your PBS script and the appropriate non-IB queue, e.g. the pace-cns queue; more queues are coming soon). The GigE tools can be found here:
/usr/local/mpich2-intel/bin/mpicc (or mpiCC, mpicxx, mpif77, mpif90)
or if you have problems with the mpi-intel tools, you can use the standard MPICH tools found here:
/usr/local/mpich2/bin/mpicc (or mpiCC, mpicxx, mpif77, mpif90)
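A quick way to avoid mixing stacks is to check which wrappers your shell picks up, or to bypass PATH and name the stack explicitly (a sketch; prog.c and the output names are placeholders):

```shell
# Both of these should resolve to the SAME stack (all-MVAPICH for IB,
# or all-MPICH2 for GigE); a mismatch means a mixed edit-compile-run cycle.
which mpicc
which mpirun

# Or name the stack explicitly at compile time:
/usr/local/mvapich/bin/mpicc      -o prog_ib prog.c   # InfiniBand build
/usr/local/mpich2-intel/bin/mpicc -o prog_ge prog.c   # GigE build
```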