Mpirun for mdrun

GROMACS version: 2021.5

I compiled GROMACS with thread-MPI so I can use the image on LSF, and I also compiled OpenMPI with LSF support. I am getting the canonical error: `Cannot rename checkpoint file; maybe you are out of disk space?`

My infrastructure team and I have tried many things, but no fix. I am running on 4 Tesla V100 GPUs, and the command runs like:

/usr/local/bin/mpirun -np 24 /usr/local/gromacs/bin/gmx mdrun -deffnm md_0_1 -ntmpi 8 -ntomp 3 -npme 4 -ntomp_pme 1 -nb gpu

I am also going to try running with -cpi -noappend, but let me know how I can attach my Dockerfile if that will help. The image is ‘kboltonlab/test_cuda:1.4’:
/usr/local/bin/mpirun -np 24 /usr/local/gromacs/bin/gmx mdrun -deffnm md_0_1 -ntmpi 8 -ntomp 3 -npme 4 -ntomp_pme 1 -nb gpu -cpi -noappend

Hello, I am a beginner with GROMACS.
I am using an MPI build. I want to run the last production command of an MD simulation:

mpirun -np $NUM_CPU gmx mdrun -ntomp 1

set init = step3_input
set mini_prefix = step4.0_minimization
set equi_prefix = step4.1_equilibration
set prod_prefix = step5_production
set prod_step = step5


If there is a problem during minimization with the single-precision build of GROMACS, please try the double-precision build for the minimization step only.

gmx grompp -f ${mini_prefix}.mdp -o ${mini_prefix}.tpr -c ${init}.gro -r ${init}.gro -p -n index.ndx -maxwarn -1
gmx_d mdrun -v -deffnm ${mini_prefix}


gmx grompp -f ${equi_prefix}.mdp -o ${equi_prefix}.tpr -c ${mini_prefix}.gro -r ${init}.gro -p -n index.ndx
gmx mdrun -v -deffnm ${equi_prefix}


set cnt = 1
set cntmax = 10

while ( ${cnt} <= ${cntmax} )
    @ pcnt = ${cnt} - 1
    set istep = ${prod_step}${cnt}
    set pstep = ${prod_step}${pcnt}

    if ( ${cnt} == 1 ) then
        set pstep = ${equi_prefix}
        gmx grompp -f ${prod_prefix}.mdp -o ${istep}.tpr -c ${pstep}.gro -p -n index.ndx
    else
        gmx grompp -f ${prod_prefix}.mdp -o ${istep}.tpr -c ${pstep}.gro -t ${pstep}.cpt -p -n index.ndx
    endif

    gmx mdrun -v -deffnm ${istep}
    @ cnt += 1
end

How can I run a simulation of 50 ns?
And what does -ntomp 1 mean (in the mpirun command on the first line of the code)?
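On the 50 ns question: the run length is not set on the mdrun command line but in the production .mdp file, as nsteps multiplied by the time step dt. A minimal sketch of the arithmetic, assuming the common dt = 0.002 ps (2 fs); your .mdp may use a different dt, in which case nsteps scales accordingly:

```shell
# Run length = nsteps * dt.
# For 50 ns = 50000 ps at dt = 0.002 ps: nsteps = 50000 / 0.002
awk 'BEGIN { printf "%d\n", 50000 / 0.002 }'   # prints 25000000
```

So with dt = 0.002 you would set nsteps = 25000000 in the production .mdp before running grompp.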

Thread-MPI is an internal GROMACS threading scheme for cases when “real” MPI is unavailable. Running a thread-MPI build of GROMACS with mpirun will not work (the ranks will not be able to communicate with each other).

If you want to use GROMACS with mpirun, you need to build it with -DGMX_MPI=ON (this disables thread-MPI and uses your “real” MPI). Such a build produces a gmx_mpi binary, which works fine with mpirun.
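As a sketch of such a build (the version, install prefix, and GPU option below are placeholder assumptions, not the exact configuration from this thread):

```shell
# Hypothetical out-of-source build of a real-MPI GROMACS; adjust paths/version.
tar xf gromacs-2021.5.tar.gz
cd gromacs-2021.5
mkdir build && cd build
cmake .. -DGMX_MPI=ON -DGMX_GPU=CUDA -DCMAKE_INSTALL_PREFIX=/usr/local/gromacs
make -j 8 && make install

# Launch with real MPI ranks; note there is no -ntmpi here,
# since that flag belongs to thread-MPI builds only.
mpirun -np 8 /usr/local/gromacs/bin/gmx_mpi mdrun -deffnm md_0_1 -ntomp 3 -npme 4 -nb gpu
```

With real MPI the rank count comes from mpirun -np, so mixing it with -ntmpi (as in the command at the top of this thread) would be contradictory even on a correctly built binary.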