Which OS X version would you recommend for compiling GROMACS with OpenMPI (and avoiding segmentation faults)?

GROMACS version: 2020.3
GROMACS modification: No
Dear All,

I could get a working mdrun of GROMACS 2020.3 on High Sierra (10.13), but I had to set -ntmpi to 1 to avoid fatal errors. It worked fine, but slower than what the system is actually capable of, since the GPU worked together with only one MPI thread. On Catalina, however, segmentation faults occur.

So basically, on 10.13 GROMACS could not be compiled with multi-threading, and unfortunately the Catalina update makes GROMACS mdrun completely unusable due to segmentation faults. I came across a response suggesting setting the environment variable MACOSX_DEPLOYMENT_TARGET=10.14. (Should I just type set MACOSX_DEPLOYMENT_TARGET=10.14 for that?)
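My guess, assuming a bash/zsh shell, is that it has to be exported in the shell before configuring rather than typed with set, roughly like this (the cmake options are just my usual ones from below):

export MACOSX_DEPLOYMENT_TARGET=10.14
cmake .. -DGMX_BUILD_OWN_FFTW=ON -DREGRESSIONTEST_DOWNLOAD=ON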

I understand this is probably an OS X fault. However, if there is an older macOS version (e.g., 10.11 or 10.12) on which GROMACS 2020.3 runs without OpenMPI or segfault issues, I would be glad to know about it. Thanks in advance for your valuable help.

System: MacBook Pro Retina 15″, Intel Core i7 quad-core 2.5 GHz, NVIDIA GT 750M, 16 GB RAM (the same problem occurs on an iMac 5K 2017, Core i7 quad-core, AMD Radeon Pro 580, 16 GB RAM)

Best Regards,

Apparently, on a MacBook there is no benefit to using MPI parallelization over thread parallelization, since it is a shared-memory architecture. Try a GCC or Intel compiler to enable OpenMP parallelization.
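For example, a thread-parallel (non-MPI) build and run might look roughly like this (a sketch only; -ntomp 4 is just an example for a quad-core machine and -deffnm md is a placeholder run name):

cmake .. -DGMX_BUILD_OWN_FFTW=ON -DREGRESSIONTEST_DOWNLOAD=ON -DGMX_MPI=off
gmx mdrun -ntmpi 1 -ntomp 4 -deffnm md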


Dear Dr. Masrul,
Kindly thank you for your response.
If you could please let me know which version of macOS/OS X would be the most trouble-free, that would be great.

The commands I use for compiling follow:
mkdir build
cd build
cmake .. -DGMX_BUILD_OWN_FFTW=ON -DREGRESSIONTEST_DOWNLOAD=ON -DGMX_GPU=on -DGMX_USE_OPENCL=on -DCMAKE_C_COMPILER=mpicc -DCMAKE_CXX_COMPILER=mpicxx -DGMX_MPI=on
make
make check
sudo make install
(then I would just export PATH with the installed bin directory, as sketched below, and execute GROMACS commands)
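Concretely, assuming the default install prefix of /usr/local/gromacs, that step is something like one of the following (GROMACS also installs a GMXRC script for this purpose):

export PATH=$PATH:/usr/local/gromacs/bin
source /usr/local/gromacs/bin/GMXRC   # equivalent; sets PATH and related variables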

On previous versions of macOS, I could run mdrun with 1 CPU thread and the GPU. I never managed to get multiple CPU threads working on a Mac. I modified my CMake command in numerous ways, but mdrun always fails with a segmentation fault or other fatal errors when -nt or -ntmpi is set to more than 1.

I would like to kindly ask how I should modify the CMake options.

Once again, I think dialing back to an earlier OS X would resolve such issues, but I am not sure on which version of OS X GROMACS would compile properly.

I do not know which version would be the best. For me, I am running 10.13 (High Sierra) on a 2012 MacBook Air, and GROMACS works well with and without MPI, running up to four threads in parallel.

Which version of GCC or other C/C++ compiler do you have installed? Have you tried compiling with -DGMX_MPI=off? Does it work if you only run

$ cmake .. -DGMX_BUILD_OWN_FFTW=ON -DREGRESSIONTEST_DOWNLOAD=ON

as in the installation guide?

Edit: Also, do you use mpiexec to run your MPI version of Gromacs? E.g., if you have compiled with MPI and just run gmx_mpi mdrun, you will run into a segmentation fault. If you run mpiexec gmx_mpi mdrun it should work.
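For instance, on a quad-core machine the invocation would be something like the following (the -np count and the -deffnm run name are placeholders):

mpiexec -np 4 gmx_mpi mdrun -deffnm md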

Thank you for your valuable help.
I have tried all of your recommendations.
And yes, I ran gmx_mpi and it did work, but it produced really weird results. On Linux the same system gets simulated normally. However, the weird and unnecessary errors lead me to think the system is not equilibrated well, or something else, which is not the case, since the exact same thing runs perfectly fine on Linux in VMware.
I reverted to High Sierra. Installing GROMACS on both Catalina and High Sierra can be done via Homebrew (brew install gromacs), but that is so slow when I run mdrun; it is like 10 days on a quad-core for a not-so-huge system.
Could you please recommend which versions of the compilers, Open MPI, GCC, etc. I should install to get the package compiled in its optimal way?
I am currently running High Sierra, and the only thing that works is the automated install via Homebrew, which does not turn on OpenCL/GPU and is very, very slow (but does seem to work).

With best regards

My configuration is simply:

brew install gcc # current version is 10.2, binary names are gcc-10 and g++-10
brew install open-mpi # version 4.0.5
cmake -DCMAKE_C_COMPILER=gcc-10 -DCMAKE_CXX_COMPILER=g++-10 # [and everything else]

But if you do want CUDA or OpenCL, I do not know how to install and configure those settings.
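If it helps, a full configure line combining my compiler settings with the GPU flags from your earlier command might look like the following (untested on my side, since I do not use OpenCL, so treat it as a sketch):

cmake .. -DGMX_BUILD_OWN_FFTW=ON -DREGRESSIONTEST_DOWNLOAD=ON -DCMAKE_C_COMPILER=gcc-10 -DCMAKE_CXX_COMPILER=g++-10 -DGMX_GPU=on -DGMX_USE_OPENCL=on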

Many thanks for your prompt response. I went back to High Sierra and recompiled GROMACS with GPU enabled. Open MPI never gets compiled right, so I run GROMACS with one CPU thread and the GPU. It is not fast enough, and it also fails with “bond length not finite”. I reduced the time step to 0.001 (instead of 0.002) and it works fine now.
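For reference, that change is just the dt parameter in the .mdp file (dt is in ps; nsteps would need to be doubled to keep the same total simulated time, since simulated time = dt × nsteps):

dt = 0.001   ; reduced from 0.002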

The only issue now is that I can only run 2.3 ns per day, which is cool but nowhere near enough for what I need.
I am using a 2017 maxed-out iMac with a quad-core 4.2 GHz i7, a really great GPU (Radeon Pro 580 with 8 GB VRAM), and 16 GB RAM.
This should be enough to run possibly 30-40 ns per day. If I could only enable Open MPI, I believe the speed would increase to something like that, as it is now only using 12 percent of the CPU power plus the GPU. I will compile using your settings and will get back to you ASAP. Once again, many thanks for the very valuable help.
