Error in installation of gromacs (thread-MPI)

GROMACS version: 2019.6
GROMACS modification: Yes/No

Dear GROMACS users,
After running the following command, I encountered an error:

cmake … -DGMX_BUILD_OWN_FFTW=ON -DREGRESSIONTEST_DOWNLOAD=ON -DGMX_MPI=ON

-- MPI is not compatible with thread-MPI. Disabling thread-MPI.
-- Could NOT find MPI_C (missing: MPI_C_LIB_NAMES MPI_C_HEADER_DIR MPI_C_WORKS)
-- Could NOT find MPI_CXX (missing: MPI_CXX_LIB_NAMES MPI_CXX_HEADER_DIR MPI_CXX_WORKS)
-- Could NOT find MPI (missing: MPI_C_FOUND MPI_CXX_FOUND)
CMake Error at cmake/gmxManageMPI.cmake:181 (message):
MPI support requested, but no MPI compiler found. Either set the
C-compiler (CMAKE_C_COMPILER) to the MPI compiler (often called mpicc), or
set the variables reported missing for MPI_C above.
Call Stack (most recent call first):
CMakeLists.txt:460 (include)

How can I fix it?

Best

I installed OpenMPI with yum install openmpi-devel

After running the cmake command again, I got:

-- Found OpenMP_C: -fopenmp
-- Found OpenMP_CXX: -fopenmp
-- Found OpenMP: TRUE
-- Could NOT find MPI_C (missing: MPI_C_LIB_NAMES MPI_C_HEADER_DIR MPI_C_WORKS)
-- Could NOT find MPI_CXX (missing: MPI_CXX_LIB_NAMES MPI_CXX_HEADER_DIR MPI_CXX_WORKS)
-- Could NOT find MPI (missing: MPI_C_FOUND MPI_CXX_FOUND)
CMake Error at cmake/gmxManageMPI.cmake:181 (message):
MPI support requested, but no MPI compiler found. Either set the
C-compiler (CMAKE_C_COMPILER) to the MPI compiler (often called mpicc), or
set the variables reported missing for MPI_C above.
Call Stack (most recent call first):
CMakeLists.txt:460 (include)

-- Configuring incomplete, errors occurred!

Hi,

it seems the MPI compilers (or rather wrappers) for C and C++ cannot be found.
Can you do a quick which mpicc and/or which mpicxx to check whether MPI was properly installed?
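
For example (mpicc -show is OpenMPI-specific and prints what the wrapper would actually invoke; mpicc --version works for most MPI implementations):

which mpicc
which mpicxx
mpicc -show     # shows the underlying compiler and flags the OpenMPI wrapper uses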

Michele

Hi. Thanks.

which mpicc: /usr/lib64/openmpi/bin/mpicc

After using the following, the problem was solved:

export PATH=$PATH:/usr/lib64/openmpi/bin
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/lib64/openmpi/lib
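
As an alternative to extending PATH, the wrappers can also be passed to CMake directly, which is what the original error message suggests. A sketch assuming the same install location as above (adjust the paths to your system):

cmake … -DGMX_BUILD_OWN_FFTW=ON -DREGRESSIONTEST_DOWNLOAD=ON -DGMX_MPI=ON -DCMAKE_C_COMPILER=/usr/lib64/openmpi/bin/mpicc -DCMAKE_CXX_COMPILER=/usr/lib64/openmpi/bin/mpicxx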

I continued the installation, but after running make check I encountered the following:

The following tests FAILED:
2 - TestUtilsMpiUnitTests (Failed)
14 - MdrunUtilityMpiUnitTests (Failed)
22 - UtilityMpiUnitTests (Failed)
29 - GmxPreprocessTests (Timeout)
41 - regressiontests/simple (Failed)
42 - regressiontests/complex (Failed)
43 - regressiontests/kernel (Failed)
44 - regressiontests/freeenergy (Failed)
45 - regressiontests/rotation (Failed)
46 - regressiontests/essentialdynamics (Failed)
Errors while running CTest
make[3]: *** [CMakeFiles/run-ctest-nophys.dir/build.make:58: CMakeFiles/run-ctest-nophys] Error 8
make[2]: *** [CMakeFiles/Makefile2:1393: CMakeFiles/run-ctest-nophys.dir/all] Error 2
make[1]: *** [CMakeFiles/Makefile2:1173: CMakeFiles/check.dir/rule] Error 2
make: *** [Makefile:626: check] Error 2

How can I resolve this?

Hi,

bear in mind that make check is not strictly necessary to install GROMACS: you can do sudo make install and see what happens (maybe run some simple parallel simulation to test if everything works properly).
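
For a quick sanity check after installing, something like this should do (a sketch, assuming the default install prefix /usr/local/gromacs and that the MPI build produced the gmx_mpi binary):

source /usr/local/gromacs/bin/GMXRC
gmx_mpi --version
mpirun -np 2 gmx_mpi mdrun -deffnm md     # assumes you already have an md.tpr from grompp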

I am not sure what is causing these errors though; maybe someone else has a clear answer. All I can think of is a version mismatch: is there any specific reason why you need GROMACS 2019? Have you tried a more recent version?
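
If you want more detail on why the tests fail, one general option (a plain CTest feature, nothing GROMACS-specific) is to re-run only the failed tests with their output, from the build directory:

ctest --rerun-failed --output-on-failure
less Testing/Temporary/LastTest.log     # full log of the last test run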

Michele

Dear Michele,

Thank you. I used make install and the installation completed correctly. After running the mdrun command:

Running on 1 node with total 16 cores, 16 logical cores
Hardware detected on host cnlinux (the node of MPI rank 0):
CPU info:
Vendor: Intel
Brand: Intel(R) Xeon(R) Silver 4214 CPU @ 2.20GHz

Using 1 MPI process
Using 16 OpenMP threads

Given the above output, which mdrun options are suitable for reducing the run time?

Best

Hi,

I really suggest you have a look at Getting good performance from mdrun in the GROMACS 2019 documentation.

It’s difficult to say a priori which mdrun options provide optimal performance since that could be both configuration and hardware dependent, so you will probably need to do some benchmarking.

Also: the number of MPI processes is not specified via mdrun, but rather via Slurm (or a similar launcher) when launching the job. All I can say at first glance is that you may want to parallelize over more processors (and probably more nodes) if the configuration is very large, and possibly avoid using too many OpenMP threads per MPI process (you can control that with export OMP_NUM_THREADS=...).
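
As a rough illustration only (the rank/thread split below is a guess for a single 16-core node, not a tuned setup; benchmark before settling on anything):

export OMP_NUM_THREADS=4
mpirun -np 4 gmx_mpi mdrun -deffnm md     # 4 MPI ranks x 4 OpenMP threads = 16 cores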

Cheers,
Michele

Thanks for your guidance.

Hi Merry,

You can follow this instruction for an MPI installation of GROMACS on an HPC system.

For the regression test error, you can simply download the test archive for your GROMACS version from Index of /regressiontests, then unpack it and pass its path to the CMake command to complete the installation. Please check the details of my instructions in the GitHub link.
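
For example, a sketch of that procedure (the exact tarball name here is an assumption; pick the one matching your GROMACS version from the index and adjust the paths):

wget https://ftp.gromacs.org/regressiontests/regressiontests-2019.6.tar.gz
tar xzf regressiontests-2019.6.tar.gz
cmake … -DGMX_BUILD_OWN_FFTW=ON -DGMX_MPI=ON -DREGRESSIONTEST_PATH=$(pwd)/regressiontests-2019.6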