GROMACS version: 2021.5
GROMACS modification: Yes -DGMX_THREAD_MPI=ON
I compiled GROMACS with thread-MPI so I can use the image on LSF, and I also compiled OpenMPI with LSF support. I am getting the canonical error:

Cannot rename checkpoint file; maybe you are out of disk space?

My infrastructure team and I have tried many things, but no fix. I am running on 4 Tesla V100 GPUs, and the command runs like:

/usr/local/bin/mpirun -np 24 /usr/local/gromacs/bin/gmx mdrun -deffnm md_0_1 -ntmpi 8 -ntomp 3 -npme 4 -ntomp_pme 1 -nb gpu -cpi -noappend

I am also going to try running with -cpi -noappend. Let me know how I can attach my Dockerfile if that will help. The image is kboltonlab/test_cuda:1.4.
Hello, I am a beginner with GROMACS.
I am using an MPI build and I want to run the final production command of my MD simulation:
mpirun -np $NUM_CPU gmx mdrun -ntomp 1
set init = step3_input
set mini_prefix = step4.0_minimization
set equi_prefix = step4.1_equilibration
set prod_prefix = step5_production
set prod_step = step5
Minimization
If there is a problem during minimization with the single-precision build of GROMACS, please try the
double-precision build of GROMACS for the minimization step only.
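For reference, the minimization step using the prefixes defined above is typically run along these lines. This is a sketch: the .mdp and topol.top file names follow the standard CHARMM-GUI output and are assumptions, and gmx_d is the conventional name of the double-precision binary (plain gmx is single precision).

```shell
# Energy minimization using the prefixes set above (csh-style, as in the script).
set init = step3_input
set mini_prefix = step4.0_minimization

# Preprocess: combine the minimization .mdp, coordinates, and topology into a .tpr.
gmx_d grompp -f ${mini_prefix}.mdp -o ${mini_prefix}.tpr \
      -c ${init}.gro -r ${init}.gro -p topol.top

# Run the minimization with the double-precision binary.
gmx_d mdrun -deffnm ${mini_prefix}
```

If minimization succeeds in single precision, the same commands with gmx instead of gmx_d are sufficient.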
Thread-MPI is an internal GROMACS threading scheme for cases when "real" MPI is unavailable. Running a thread-MPI build of GROMACS under mpirun will not work (the ranks will not be able to communicate with each other).
If you want to use GROMACS with mpirun, you need to build it with -DGMX_MPI=ON (this disables thread-MPI and uses your "real" MPI). It produces a gmx_mpi binary, which works fine with mpirun.
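A minimal sketch of such a build and launch, assuming your MPI compiler wrappers are on the PATH and an install prefix of /usr/local/gromacs (paths, rank counts, and the CUDA option are illustrative, adjust to your system):

```shell
# Configure GROMACS against a "real" MPI library; -DGMX_MPI=ON disables thread-MPI.
cmake .. -DGMX_MPI=ON -DGMX_GPU=CUDA \
      -DCMAKE_INSTALL_PREFIX=/usr/local/gromacs
make -j 8 && make install

# The resulting binary is gmx_mpi. Launch it through mpirun; rank count comes
# from -np, and -ntomp sets OpenMP threads per rank. Do not pass -ntmpi here,
# since thread-MPI is not available in a real-MPI build.
mpirun -np 8 /usr/local/gromacs/bin/gmx_mpi mdrun -deffnm md_0_1 \
       -ntomp 3 -npme 4 -nb gpu -cpi -noappend
```

Note that in the original command above, mpirun -np 24 was combined with -ntmpi 8 on a thread-MPI build, which starts 24 independent copies of gmx rather than one 24-rank job.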