Running a parallel MPI job

From CAC Documentation wiki

The default version of MPI is Intel MPI.

  1. Submitting a parallel job is not significantly different from submitting a serial job. First, you'll need to compile your MPI code. In this example, we'll use a simple 'HelloWorld' application. You can follow along from the MPI Hello World source code. Compile this (or any MPI program); we'll call ours hello.out:
    $ mpicc -o hello.out HelloWorld.c
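    For reference, here is a minimal sketch of what such a HelloWorld.c might contain. The exact contents of the linked source are not reproduced here; this is just the classic MPI pattern of initializing, reporting rank and size, and finalizing:

    ```c
    /* Minimal MPI "Hello World" sketch -- a generic example, not
     * necessarily identical to the source linked above. */
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char *argv[])
    {
        int rank, size;
        MPI_Init(&argc, &argv);               /* start the MPI runtime      */
        MPI_Comm_rank(MPI_COMM_WORLD, &rank); /* this process's rank        */
        MPI_Comm_size(MPI_COMM_WORLD, &size); /* total number of processes  */
        printf("Hello from rank %d of %d\n", rank, size);
        MPI_Finalize();                       /* shut down cleanly          */
        return 0;
    }
    ```

    Run under mpiexec, each of the started processes prints one line with its own rank.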
  2. Write a batch script that calls mpiexec to execute the compiled MPI code.
     #PBS -A AcctNumber
     #PBS -l walltime=02:00,nodes=4
     #PBS -N mpiTest
     #PBS -j oe
     #PBS -q v4
     # Because jobs start in the HOME directory, move to where we submitted.
     cd "$PBS_O_WORKDIR" 
     # Count the number of nodes
     np=$(wc -l < $PBS_NODEFILE)
     # Boot MPI on the nodes
     mpdboot -n $np --verbose -r /usr/bin/ssh -f $PBS_NODEFILE
     # Now execute one process per node
     mpiexec -ppn 1 -np $np $HOME/v4Test/hello.out
    There are two things worth noting in the above script. First, the "-j oe" directive joins the output and error streams into a single output file. Second, a "ppn" setting in the resource request (e.g., "-l nodes=4:ppn=1") usually determines how many times each machine name appears in $PBS_NODEFILE... HOWEVER, be aware that at CAC, this specification is ignored! In effect, it is always reset to 1, so that each machine name appears exactly once in the machine file. This is necessary to guarantee that you are granted exclusive access to the nodes in your batch job, and it helps the CAC accounting system function properly.
    The "ppn" batch directive should be distinguished from a second use of "ppn" as an option to mpiexec (-ppn is actually an alias for -perhost). The -ppn or -perhost flag determines how many MPI processes are started on each machine listed in $PBS_NODEFILE. At CAC, the standard $PATH takes you to the Intel mpiexec, which by default assumes "-perhost 8" on v4. Because this default is implementation-dependent, we recommend setting the -ppn or -perhost flag explicitly every time you call mpiexec; that way you ensure you get exactly what you expect if you ever use a different mpiexec. See below for more details on PPN.
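    The node-counting line in the script can be previewed interactively. A small shell sketch, using a temporary file to stand in for $PBS_NODEFILE (the machine names here are made up; at CAC each name would appear exactly once):

    ```shell
    #!/bin/sh
    # Simulate $PBS_NODEFILE with one made-up machine name per line.
    nodefile=$(mktemp)
    printf 'compute-1\ncompute-2\ncompute-3\ncompute-4\n' > "$nodefile"

    # Same counting trick as the batch script: one line per node.
    # Reading via < avoids wc printing the filename alongside the count.
    np=$(wc -l < "$nodefile")
    echo "np=$np"

    rm -f "$nodefile"
    ```

    With one machine name per line, $np matches the node count, so "mpiexec -ppn 1 -np $np" launches exactly one process per node.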
  3. Once the script is ready, save it under a reasonable name and submit the job using nsub:
    $ nsub