Pegasus IV Cluster
Parallel jobs
The Pegasus IV cluster is set up for parallel programming using MPI (Message Passing Interface). We have installed OpenMPI 1.10.1 for use with the Intel compiler suite.

Compiling parallel code

The compilers are invoked via wrapper scripts that supply the appropriate compiler switches and link against the parallel libraries. To compile a parallel program written in Fortran, type:
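For example, assuming a source file named my_program.f90 (the file name is only illustrative):

    mpifort my_program.f90

This produces an executable named a.out in the current directory.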
If you want to change the executable name or optimize your code, mpifort accepts all standard Intel Fortran compiler switches (see Serial jobs). (Note: Instead of mpifort, you can also use mpif77 and mpif90, but these commands are deprecated and may be removed in future versions of OpenMPI.) Parallel C code is compiled using:
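For example, again with an illustrative source file name:

    mpicc my_program.c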
while parallel C++ code is compiled by invoking:
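For example, likewise with an illustrative file name:

    mpiCC my_program.cpp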
As in the case of mpifort, both mpicc and mpiCC pass standard compiler switches to the underlying compilers, i.e., icc and icpc, respectively.

Submitting a parallel job

Similar to serial jobs, parallel jobs can be started interactively or submitted to the batch queue. To start a parallel job interactively, you can use "qsub -I", but you need to specify how many processors you wish to use. Typing
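a command of the following form (the exact resource specification may vary, but a plain nodes count without ppn is counted in processors, as explained in the note at the end of this section):

    qsub -I -l nodes=4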
requests an interactive job with 4 processors. Once your interactive job has started, type
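mpirun followed by the name of your parallel executable, for example (the executable name here is illustrative):

    mpirun ./my_program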
To submit a parallel batch job, you need an appropriate Torque script file. A minimal example looks like:
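The following sketch illustrates the idea (the shell, walltime, and executable name are illustrative; the resource request in the second line is discussed next):

    #!/bin/bash
    #PBS -l nodes=2:ppn=4
    #PBS -l walltime=01:00:00
    cd $PBS_O_WORKDIR
    mpirun ./my_program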
This script is very similar to the Torque script for a serial job. However, the second line:
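    #PBS -l nodes=2:ppn=4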
now requests a total of 8 processors (2 compute nodes with 4 processors each) and in the last line the executable is started via mpirun (which will use all processors allocated to the job by Torque). You can also combine resources of different types. For example,
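a resource specification such as

    #PBS -l nodes=2:quad:ppn=4+2:hexa:ppn=6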
requests a total of 20 processors (2 nodes of type quad with 4 processors each and 2 nodes of type hexa with 6 processors each).

Note: If ppn is not specified in the resource request, a "node" is interpreted as a single processor rather than a full physical compute node. If both nodes and ppn are specified, "nodes" refers to compute nodes (machines) and ppn refers to the number of processors per node.