Mdrun with CPU and GPU

GROMACS version: 2023
GROMACS modification: Yes/No

Hello to all.
I am running an MD simulation of a protein with CUDA enabled to compare run times on the CPU and the GPU.
When I use the command "gmx mdrun -deffnm md_0_1", I get the following output:

Executable: /usr/local/gromacs/bin/gmx
Data prefix: /usr/local/gromacs
Working dir: /home/ssari/gromacs-work
Command line:
gmx mdrun -deffnm md_0_1

Back Off! I just backed up md_0_1.log to ./#md_0_1.log.10#
Reading file md_0_1.tpr, VERSION 2023 (single precision)
Changing nstlist from 10 to 100, rlist from 1 to 1.164

1 GPU selected for this run.
Mapping of GPU IDs to the 2 GPU tasks in the 1 rank on this node:
PP:0,PME:0
PP tasks will do (non-perturbed) short-ranged interactions on the GPU
PP task will update and constrain coordinates on the GPU
PME tasks will do all aspects on the GPU
Using 1 MPI thread
Using 12 OpenMP threads

Back Off! I just backed up md_0_1.xtc to ./#md_0_1.xtc.10#

Back Off! I just backed up md_0_1.edr to ./#md_0_1.edr.10#
starting mdrun ‘Protein in water’
500000 steps, 1000.0 ps.

Writing final coordinates.

Back Off! I just backed up md_0_1.gro to ./#md_0_1.gro.10#

               Core t (s)   Wall t (s)        (%)
       Time:     3620.185      301.686     1200.0
                 (ns/day)    (hour/ns)
Performance:      286.391        0.084

After that, I used the command "gmx mdrun -deffnm md_0_1 -nb gpu" to see the difference:

Executable: /usr/local/gromacs/bin/gmx
Data prefix: /usr/local/gromacs
Working dir: /home/ssari/gromacs-work
Command line:
gmx mdrun -deffnm md_0_1 -nb gpu

Back Off! I just backed up md_0_1.log to ./#md_0_1.log.9#
Reading file md_0_1.tpr, VERSION 2023 (single precision)
Changing nstlist from 10 to 100, rlist from 1 to 1.164

1 GPU selected for this run.
Mapping of GPU IDs to the 2 GPU tasks in the 1 rank on this node:
PP:0,PME:0
PP tasks will do (non-perturbed) short-ranged interactions on the GPU
PP task will update and constrain coordinates on the GPU
PME tasks will do all aspects on the GPU
Using 1 MPI thread
Using 12 OpenMP threads

Back Off! I just backed up md_0_1.xtc to ./#md_0_1.xtc.9#

Back Off! I just backed up md_0_1.edr to ./#md_0_1.edr.9#
starting mdrun ‘Protein in water’
500000 steps, 1000.0 ps.

Writing final coordinates.

Back Off! I just backed up md_0_1.gro to ./#md_0_1.gro.9#

               Core t (s)   Wall t (s)        (%)
       Time:     3591.749      299.315     1200.0
                 (ns/day)    (hour/ns)
Performance:      288.660        0.083

The performance results are almost identical. I am running GROMACS on WSL2 with the CUDA toolkit enabled. In both runs, GROMACS used both the CPU and the GPU (I checked via HWInfo). Should I use different commands? Is it possible to run mdrun on the CPU or the GPU separately? If so, what do I have to do to run it on just the CPU or just the GPU?
I am new to GROMACS and appreciate the help.

They are close because GROMACS uses the GPU in both cases:

PP tasks will do (non-perturbed) short-ranged interactions on the GPU
PP task will update and constrain coordinates on the GPU
PME tasks will do all aspects on the GPU

By default, when GROMACS is built with GPU support, it tries to offload as much work as possible to the GPU. You can pass -nb cpu to disable the GPU offloading.
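For example, something like the following should let you compare a CPU-only run against a fully offloaded run (a minimal sketch; -nb, -pme and -update are standard mdrun task-assignment options, but check the log of each run to confirm where the tasks actually ended up):

# keep the non-bonded, PME and coordinate-update work on the CPU
gmx mdrun -deffnm md_0_1 -nb cpu -pme cpu -update cpu

# explicitly offload the same tasks to the GPU (essentially what your default run already did)
gmx mdrun -deffnm md_0_1 -nb gpu -pme gpu -update gpu

The wall-clock time and ns/day reported at the end of each run then give you the CPU-versus-GPU comparison you are after.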

You might read more about how GROMACS uses GPUs in the User Guide: https://manual.gromacs.org/documentation/current/user-guide/mdrun-performance.html#running-mdrun-with-gpus.

Thank you for your help, it worked for me.
