Different versions of MPI don't affect how programs are written, but the calls a batch script makes to mpdboot, mpiexec, and mpdallexit vary with the MPI implementation. For Intel MPI on CAC machines, start mpdboot so that it uses ssh:
node_cnt=$(wc -l < $PBS_NODEFILE)
mpdboot -n $node_cnt --verbose -r /usr/bin/ssh -f $PBS_NODEFILE
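A note on the counting idiom: `wc -l file` prints the filename after the count, so the redirect form `wc -l < file` is the safer choice when the result feeds an arithmetic context. A minimal demonstration with a hypothetical two-node hosts file (the node names are just examples):

```shell
# Hypothetical demonstration: count lines the way the batch script
# counts nodes in $PBS_NODEFILE.
hostsfile=$(mktemp)
printf 'compute-3-48\ncompute-3-47\n' > "$hostsfile"

with_name=$(wc -l "$hostsfile")     # "2 /tmp/..." -- count plus filename
count_only=$(wc -l < "$hostsfile")  # "2" -- safe to use in $(( ... ))

echo "$count_only"
rm -f "$hostsfile"
```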
Here, the -n argument is the number of MPI daemons, or mpds, to start: one for each node in the batch job. The resource manager creates a $PBS_NODEFILE containing a list of machine names. For instance, a two-node job on the development queue has a $PBS_NODEFILE that reads:

compute-3-48
compute-3-47
The -np argument to the mpiexec command specifies the number of processes to start.
cores_per_node=$(grep -c processor /proc/cpuinfo)
process_cnt=$((cores_per_node * node_cnt))
mpiexec -np $process_cnt ./mytask
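Putting the pieces together, a batch script for this style of launch takes roughly the following shape: start the daemons, run the job, then tear the daemon ring down with mpdallexit. This is a sketch of the sequence described above; ./mytask is a placeholder for your own MPI executable.

```shell
#!/bin/sh
# Sketch of a PBS batch script for Intel MPI on this style of cluster.
# ./mytask is a placeholder for the real MPI executable.

# One mpd daemon per node listed in $PBS_NODEFILE, started over ssh
node_cnt=$(wc -l < "$PBS_NODEFILE")
mpdboot -n "$node_cnt" --verbose -r /usr/bin/ssh -f "$PBS_NODEFILE"

# One MPI process per core across all nodes
cores_per_node=$(grep -c processor /proc/cpuinfo)
process_cnt=$((cores_per_node * node_cnt))
mpiexec -np "$process_cnt" ./mytask

# Shut the daemon ring down when the job is done
mpdallexit
```

This fragment only makes sense inside a PBS job, where $PBS_NODEFILE is defined and the mpd commands are on the path, so it is shown here as a template rather than something to run locally.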
The command also takes a -ppn argument, which stands for processes per node; it must come before the -np argument. This argument does not actually limit the number of processes that can run on a node; rather, it controls the order in which processes are assigned to nodes. You can think of -ppn 2 as repeating each line of the $PBS_NODEFILE twice. Given two nodes, here is where each rank runs.
mpiexec -ppn 1 -np 2 ./mytask
   compute-3-48: 0
   compute-3-47: 1

mpiexec -ppn 1 -np 16 ./mytask
   compute-3-48: 0,2,4,6,8,10,12,14
   compute-3-47: 1,3,5,7,9,11,13,15

mpiexec -ppn 2 -np 16 ./mytask
   compute-3-48: 0,1,4,5,8,9,12,13
   compute-3-47: 2,3,6,7,10,11,14,15

mpiexec -ppn 8 -np 16 ./mytask
   compute-3-48: 0-7
   compute-3-47: 8-15
Calling mpiexec without the -ppn option is the same as specifying -ppn 8 on an 8-core node.
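The placement rule can be sketched as a small script: ranks are dealt out -ppn at a time, round-robin over the nodes, so rank r lands on node (r / ppn) mod nnodes. This is only an illustration of the behavior described above, not part of Intel MPI; the node names are taken from the examples and the node_for_rank helper is hypothetical.

```shell
#!/bin/bash
# Illustrative sketch (not part of Intel MPI): which node hosts each rank
# when processes are assigned -ppn at a time, round-robin over the nodes.
nodes=(compute-3-48 compute-3-47)   # node names from the examples above

# node_for_rank RANK PPN -- hypothetical helper; echoes the node hosting RANK
node_for_rank() {
    local rank=$1 ppn=$2
    local idx=$(( (rank / ppn) % ${#nodes[@]} ))
    echo "${nodes[idx]}"
}

# Reproduce the "-ppn 2 -np 16" placement from the table above
for ((rank = 0; rank < 16; rank++)); do
    echo "rank $rank -> $(node_for_rank "$rank" 2)"
done
```

With ppn=2, ranks 0 and 1 go to compute-3-48, ranks 2 and 3 to compute-3-47, and so on, matching the -ppn 2 line above.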
usage: mpdboot --totalnum=<n_to_start> [--file=<hostsfile>] [--help]
               [--rsh=<rshcmd>] [--user=<user>] [--mpd=<mpdcmd>]
               [--loccons] [--remcons] [--shell] [--verbose] [-1]
               [--ncpus=<ncpus>] [--ifhn=<ifhn>] [--chkup] [--chkuponly]
               [--ordered]
or, in short form,
       mpdboot -n n_to_start [-f <hostsfile>] [-h] [-r <rshcmd>] [-u <user>]
               [-m <mpdcmd>] -s -v [-1] [-c] [-o]
--totalnum specifies the total number of mpds to start; at least one mpd will be started locally, and others on the machines specified by the file argument; by default, only one mpd per host will be started even if the hostname occurs multiple times in the hosts file
-1 means remove the restriction of starting only one mpd per machine; in this case, at most the first mpd on a host will have a console
--file specifies the file of machines to start the rest of the mpds on; it defaults to mpd.hosts
--mpd specifies the full path name of mpd on the remote hosts if it is not in your path
--rsh specifies the name of the command used to start remote mpds; it defaults to rsh; an alternative is ssh
--shell says that the Bourne shell is your default for rsh
--verbose shows the ssh attempts as they occur; it does not provide confirmation that the sshs were successful
--loccons says you do not want a console available on local mpd(s)
--remcons says you do not want consoles available on remote mpd(s)
--ncpus indicates how many cpus you want to show for the local machine; others are listed in the hosts file
--ifhn indicates the interface hostname to use for the local mpd; others may be specified in the hosts file
--chkup requests that mpdboot try to verify that the hosts in the host file are up before attempting to start mpds on any of them; it just checks the number of hosts specified by -n
--chkuponly requests that mpdboot try to verify that the hosts in the host file are up; it then terminates; it just checks the number of hosts specified by -n
--ordered requests that mpdboot start all the mpd daemons in the exact order specified in the host file
usage:
mpiexec [-h or -help or --help]    # get this message
mpiexec -file filename             # (or -f) filename contains XML job description
mpiexec [global args] [local args] executable [args]
where global args may be
   -l                       # line labels by MPI rank
   -bnr                     # MPICH1 compatibility mode
   -machinefile             # file mapping procs to machines
   -nolocal                 # do not start on local system
   -perhost <n>             # place consecutive <n> processes on each host
   -ppn <n>                 # stands for "processes per node"; an alias to -perhost <n>
   -grr <n>                 # stands for "group round robin"; an alias to -perhost <n>
   -rr                      # involve "round robin" startup scheme
   -s <spec>                # direct stdin to "all" or 1,2 or 2-4,6
   -1                       # override default of trying 1st proc locally
   -ifhn                    # network interface to use locally
   -tv                      # run procs under totalview (must be installed)
   -tvsu                    # totalview startup only
   -gdb                     # run procs under gdb
   -idb                     # run procs under idb
   -m                       # merge output lines (default with gdb)
   -a                       # means assign this alias to the job
   -ecfn                    # output_xml_exit_codes_filename
   -g<local arg name>       # global version of local arg (below)
   -trace [<libraryname>]   # trace the application using <libraryname> profiling library; default is libVT.so
   -check [<libraryname>]   # check the application using <libraryname> checking library; default is libVTmc.so
   -tune                    # apply the tuned data produced by the MPI Tuner utility
   -noconf                  # do not use any mpiexec configuration files
and local args may be
   -n <n> or -np <n>        # number of processes to start
   -wdir <dirname>          # working directory to start in
   -umask <umask>           # umask for remote process
   -path <dirname>          # place to look for executables
   -host <hostname>         # host to start on
   -soft <spec>             # modifier of -n value
   -arch <arch>             # arch type to start on (not implemented)
   -envall                  # pass all env vars in current environment
   -envnone                 # pass no env vars
   -envlist <list of env var names>  # pass current values of these vars
   -env <name> <value>      # pass this value of this env var
mpiexec [global args] [local args] executable args : [local args] executable...
mpiexec -gdba jobid           # gdb-attach to existing jobid
mpiexec -idba jobid           # idb-attach to existing jobid
mpiexec -configfile filename  # filename contains cmd line segs as lines
(See Reference Manual for more details)

Examples:
   mpiexec -l -n 10 cpi 100
   mpiexec -genv QPL_LICENSE 4705 -n 3 a.out
   mpiexec -n 1 -host foo master : -n 4 -host mysmp slave