GROMACS version: 2022.3
GROMACS modification: No
Hi,
on an HPC cluster with SLURM, I noticed a loss of performance when my MD run shares a node with other jobs using other GPUs (Tesla), compared to running on an exclusive node with no other jobs. The other jobs are different programs running on different GPUs in the same node. My system is large, around 90k atoms, and I normally get 200 ns/day with 12 CPUs/1 GPU when no other jobs are present on the node. On a shared node, I get less than 70-80 ns/day. Could it be due to I/O?
Thanks
Fixed: I was using a shared NFS filesystem, so as soon as the node was busy with other programs, the performance dropped.
Using the node-local TMPDIR, everything ran as expected.
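For anyone hitting the same issue, here is a minimal job-script sketch of the TMPDIR approach: stage the input to node-local scratch, run there, and copy results back at the end. The file names, resource requests, and `-deffnm` naming are placeholders, so adapt them to your cluster and run setup:

```shell
#!/bin/bash
#SBATCH --job-name=md_tmpdir
#SBATCH --ntasks=1
#SBATCH --cpus-per-task=12
#SBATCH --gres=gpu:1

# Hypothetical sketch: write trajectories/logs to node-local scratch
# (TMPDIR) instead of the shared NFS filesystem.

WORKDIR=$SLURM_SUBMIT_DIR
SCRATCH=${TMPDIR:-/tmp/$SLURM_JOB_ID}   # fall back if TMPDIR is unset
mkdir -p "$SCRATCH"

# Stage input to node-local scratch (topol.tpr is a placeholder name)
cp "$WORKDIR"/topol.tpr "$SCRATCH"/
cd "$SCRATCH"

# All mdrun output now lands on local disk, not NFS
gmx mdrun -deffnm topol -ntomp "$SLURM_CPUS_PER_TASK"

# Copy results back to the shared filesystem when the run finishes
cp topol.* "$WORKDIR"/
```

Note that if the job is killed before the final `cp`, output left in TMPDIR may be purged, so long runs may want periodic copies or a trap on exit.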