GROMACS version: 2024.1
GROMACS modification: No
I’m using an RTX 4090, and I’m running just gmx mdrun -deffnm md_0_1. I heard that GROMACS automatically detects the CUDA device and computes on it by default. Is this correct, or do I have to add the -gpu_id 0 argument?
Dear @uni
Take a look at what GROMACS prints as output when you run mdrun to check what it is actually using at the hardware level. 99% of the time it will say that it found one GPU and is automatically mapping some tasks onto it! The -gpu_id flag is usually useful only if you have more than one GPU and/or you want to map specific tasks (with the additional -gputasks flag).
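To make this concrete, here is a sketch of the difference. The single-GPU case needs no flags; the two-GPU node and the task mapping are hypothetical examples, not taken from the thread:

```shell
# Single GPU (e.g. one RTX 4090): no device flags needed, GROMACS detects it.
gmx mdrun -deffnm md_0_1

# Hypothetical two-GPU node: restrict the run to GPU 0 only.
gmx mdrun -deffnm md_0_1 -gpu_id 0

# Or map tasks explicitly: 2 thread-MPI ranks, nonbonded on GPU 0,
# the dedicated PME rank on GPU 1 (one digit per GPU task in -gputasks).
gmx mdrun -deffnm md_0_1 -ntmpi 2 -nb gpu -pme gpu -npme 1 -gputasks 01
```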
Thanks @obZehn
I am doing energy minimization. Can you please help me figure out the other arguments, like -ntmpi, -ntomp, -ntomp_pme, nsteps, the algorithm, etc.? I want to run the simulation faster and hopefully reach the lowest minimized energy.
For energy minimization, fine-tuning these parameters is not really worth it: this step usually lasts a few seconds to tens of seconds, so it has no impact on the overall simulation time. Generally, the only flag I use all the time is -pin on.
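As a minimal sketch of that advice (the filename em is an assumption, substitute your own -deffnm prefix):

```shell
# Pin threads to cores; cheap to add and usually helps performance consistency.
gmx mdrun -deffnm em -pin on
```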
For “real” MD simulations, e.g. NpT/NVT/NVE where you actually have dynamics, the main flags I usually tune are -ntmpi, -ntomp and -gpu_id if I have more than one GPU per node (and access to all of them). For non-MPI runs, where the simulation is confined to one node, I usually rely on GROMACS’ built-in thread-MPI, and I find that 1 to 2 MPI ranks per GPU and 4 to 10 CPU threads per rank give me the best results. This depends, however, on the system, in particular the number of atoms, and on the hardware of the node. Most of the time GROMACS actually does a very good job of finding the best combination on its own, and running without specific flags already gives top performance.
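Putting those rules of thumb together, a sketch for a hypothetical single-node box with two GPUs and 32 cores (the counts are illustrative, not a recommendation for your machine; always benchmark and compare the ns/day reported in the log):

```shell
# 4 thread-MPI ranks (2 per GPU), 8 OpenMP threads per rank,
# using GPUs 0 and 1, with thread pinning enabled.
gmx mdrun -deffnm md_0_1 -ntmpi 4 -ntomp 8 -gpu_id 01 -pin on
```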