Two different parallelization paradigms are currently implemented
in QUANTUM ESPRESSO:
- Message-Passing (MPI). A copy of the executable runs
on each CPU; each copy lives in a different world, with its own
private set of data, and communicates with the other copies only
via calls to MPI libraries. MPI parallelization requires compilation
for parallel execution, linking with MPI libraries, and execution
using a launcher program (which depends upon the specific machine).
The number of CPUs used is specified at run time either as an option
to the launcher or by the batch queue system.
- OpenMP. A single executable spawns subprocesses
(threads) that perform specific tasks in parallel.
OpenMP can be implemented via compiler directives (explicit
OpenMP) or via multithreading libraries (library OpenMP).
Explicit OpenMP requires compilation for OpenMP execution;
library OpenMP requires only linking to a multithreaded
version of the mathematical libraries, e.g.:
ESSLSMP, ACML_MP, MKL (the latter is natively multithreaded).
The number of threads is specified at run time via the environment
variable OMP_NUM_THREADS.
MPI is the well-established, general-purpose parallelization paradigm.
In QUANTUM ESPRESSO several parallelization levels, specified at run time
via command-line options to the executable, are implemented
with MPI. This is your first choice for execution on a parallel
machine.
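As a concrete illustration, an MPI run might be launched as sketched below. The launcher name (mpirun, mpiexec, srun, ...), its option syntax, and the input/output file names are machine-dependent assumptions, not fixed by QUANTUM ESPRESSO itself:

```shell
# Sketch of an MPI launch of pw.x on 8 CPUs; "mpirun" and the
# file names are hypothetical and must be adapted to your machine.
NP=8                                      # number of MPI processes
CMD="mpirun -np $NP pw.x -inp pw.scf.in"
echo "$CMD"                               # show the command that would be run
# $CMD > pw.scf.out                       # run it where MPI and QE are installed
```

On a batch system the number of processes is typically set by the queue script rather than on the command line.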
Library OpenMP is a low-effort parallelization suitable for
multicore CPUs. Its effectiveness relies upon the quality of
the multithreading libraries and the availability of
multithreading FFTs. If you are using MKL, you may want to select FFTW3
(set CPPFLAGS=-D__FFTW3... in make.sys) and to link with the MKL
interface to FFTW3. You will get a decent speedup (~25%) on two cores.
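Since library OpenMP is controlled entirely through the environment, a run differs from a serial one only in setting OMP_NUM_THREADS first. The thread count of 2 below matches the two-core example above but is otherwise arbitrary:

```shell
# Set the number of OpenMP threads before launching the executable;
# 2 is an illustrative value (e.g. one thread per core on a dual-core CPU).
export OMP_NUM_THREADS=2
echo "OMP_NUM_THREADS=$OMP_NUM_THREADS"
# ./pw.x -inp pw.scf.in > pw.scf.out   # hypothetical run using 2 threads
```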
Explicit OpenMP is a recent addition, still under
development, devised to increase scalability on
large multicore parallel machines. Explicit OpenMP can be used
together with MPI and also together with library OpenMP. Beware
conflicts between the various kinds of parallelization!
If you don't know how to run MPI processes
and OpenMP threads in a controlled manner, forget about mixed
OpenMP-MPI parallelization.
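If you do control process and thread placement, a mixed run can be sketched as follows. All counts, the launcher name, and the file names are assumptions to be adapted to your machine; the key point is that each MPI process spawns OMP_NUM_THREADS threads, so the product of the two counts should not exceed the available cores:

```shell
# Mixed MPI + OpenMP sketch: 4 MPI processes, each spawning 2 OpenMP
# threads, for 8 cores in total (all counts are illustrative).
export OMP_NUM_THREADS=2
CMD="mpirun -np 4 pw.x -inp pw.scf.in"
echo "$CMD  # each process uses $OMP_NUM_THREADS threads"
# $CMD > pw.scf.out   # run on a machine with MPI and QE installed
```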
paolo giannozzi
2015-03-08