– MPI is not compatible with thread-MPI. Disabling thread-MPI.
– Could NOT find MPI_C (missing: MPI_C_LIB_NAMES MPI_C_HEADER_DIR MPI_C_WORKS)
– Could NOT find MPI_CXX (missing: MPI_CXX_LIB_NAMES MPI_CXX_HEADER_DIR MPI_CXX_WORKS)
– Could NOT find MPI (missing: MPI_C_FOUND MPI_CXX_FOUND)
CMake Error at cmake/gmxManageMPI.cmake:181 (message):
MPI support requested, but no MPI compiler found. Either set the
C-compiler (CMAKE_C_COMPILER) to the MPI compiler (often called mpicc), or
set the variables reported missing for MPI_C above.
Call Stack (most recent call first):
CMakeLists.txt:460 (include)
– Found OpenMP_C: -fopenmp
– Found OpenMP_CXX: -fopenmp
– Found OpenMP: TRUE
It seems the MPI compilers (or rather the compiler wrappers) for C and C++ cannot be found.
Can you run a quick which mpicc and/or which mpicxx to check whether MPI was properly installed?
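If the wrappers do turn up, you can point CMake at them explicitly when reconfiguring. A minimal sketch of the check plus reconfigure step (the build directory and the install prefix are only example assumptions; adjust to your system):

```
# Check that the MPI compiler wrappers are on the PATH
which mpicc
which mpicxx

# If found, reconfigure GROMACS with CMake pointed at the wrappers
# (run from a clean build directory; the install prefix is just an example)
cmake .. \
  -DGMX_MPI=ON \
  -DCMAKE_C_COMPILER=mpicc \
  -DCMAKE_CXX_COMPILER=mpicxx \
  -DCMAKE_INSTALL_PREFIX=/usr/local/gromacs
```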
Bear in mind that make check is not strictly necessary to install GROMACS: you can run sudo make install and see what happens (then run a simple parallel simulation to check that everything works properly; see the sketch below).
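A minimal sketch of that test, assuming the default install prefix and an MPI-enabled build (the binary name gmx_mpi and the input file topol.tpr are placeholders that depend on your configure options and your system):

```
sudo make install
source /usr/local/gromacs/bin/GMXRC        # puts the GROMACS binaries on the PATH (assumed prefix)
gmx_mpi --version                          # confirm the MPI build starts at all
mpirun -np 4 gmx_mpi mdrun -s topol.tpr    # small test run; topol.tpr is a placeholder input
```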
I am not sure what is causing these errors, though; maybe someone else has a clear answer. All I can think of is a version mismatch: is there any specific reason why you need GROMACS 2019? Have you tried a more recent version?
Thank you. I used make install and the installation completed correctly. After running the mdrun command:
Running on 1 node with total 16 cores, 16 logical cores
Hardware detected on host cnlinux (the node of MPI rank 0):
CPU info:
Vendor: Intel
Brand: Intel(R) Xeon(R) Silver 4214 CPU @ 2.20GHz
Using 1 MPI process
Using 16 OpenMP threads
Given the above output, which mdrun options are suitable for finishing the run in a shorter time?
It’s difficult to say a priori which mdrun options give optimal performance, since that depends on both the system configuration and the hardware, so you will probably need to do some benchmarking.
Also: the number of MPI processes is not specified via mdrun, but rather via SLURM (or a similar scheduler) when launching the job. All I can say at first glance is that you may want to parallelize over more processors (and probably more nodes) if the system is very large, and possibly avoid using too many OpenMP threads per MPI process (you can control that with export OMP_NUM_THREADS=...).
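As a rough illustration (not a tuned setup), a SLURM job script splitting the work between MPI ranks and OpenMP threads could look like the sketch below; all node, rank, and thread counts are assumptions you should benchmark against your own system:

```
#!/bin/bash
#SBATCH --nodes=2              # example values only; benchmark different splits
#SBATCH --ntasks-per-node=8    # MPI ranks per node
#SBATCH --cpus-per-task=2      # OpenMP threads per rank

export OMP_NUM_THREADS=$SLURM_CPUS_PER_TASK

# gmx_mpi is the usual name of an MPI-enabled build; topol.tpr is a placeholder input
srun gmx_mpi mdrun -s topol.tpr -ntomp $OMP_NUM_THREADS
```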
You can find these instructions for an MPI installation of GROMACS on an HPC system.
For the regression-test error, you can simply download the tarball matching your GROMACS version from Index of /regressiontests, unpack it, and add its path to the CMake command to complete the installation procedure. Please check the detailed instructions in the GitHub link.
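For illustration, the steps could look roughly like this (the 2019.6 tarball name and the paths are assumptions; pick the archive matching your GROMACS version from the index page above):

```
# Download and unpack the regression tests (filename is an example; check the index page)
wget https://ftp.gromacs.org/regressiontests/regressiontests-2019.6.tar.gz
tar xfz regressiontests-2019.6.tar.gz

# From the build directory, point CMake at the unpacked tests, then build and test
cmake .. -DGMX_MPI=ON -DREGRESSIONTEST_PATH=/path/to/regressiontests-2019.6
make
make check
```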