GROMACS version: 2023.2
GROMACS modification: No
I have compiled GROMACS-CP2K with CUDA. The mdrun GPU utilization is almost always zero; I only see occasional spikes in GPU usage.
In order to help you, we would need more information. Could you post the log file of a simulation, starting from the top of the file until the first simulation step?
GROMACS 2022.4 was built with CP2K for CUDA.
The following are the commands used for the simulation.
############################classical######################
gmx_cp2k solvate -cp conf.gro -o conf.gro -p topol.top -shell 10
gmx_cp2k grompp -f em.mdp -p topol.top -c conf -o egfp-genions.tpr -maxwarn 10
gmx_cp2k genion -s egfp-genions.tpr -p topol.top -o conf.gro -neutral
gmx_cp2k grompp -f em.mdp -p topol.top -c conf -o egfp-em.tpr
gmx_cp2k mdrun -s egfp-em.tpr -deffnm egfp-em
gmx_cp2k grompp -f md-mm-nvt.mdp -p topol.top -c conf.gro -t egfp-em.trr -o egfp-mm-nvt.tpr
gmx_cp2k mdrun -s egfp-mm-nvt.tpr -deffnm egfp-mm-nvt -v
gmx_cp2k make_ndx -f conf.gro
> a 938-956
> name 18 QMatoms
> q
###########################QMMM################################################
gmx_cp2k grompp -f md-qmmm-nvt.mdp -p topol.top -c conf.gro -t egfp-mm-nvt.trr -n index.ndx -o egfp-qmmm-nvt.tpr -maxwarn 1
gmx_cp2k mdrun -s egfp-qmmm-nvt.tpr -deffnm egfp-qmmm-nvt -v
###############################################################################
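For context, the QM/MM grompp step needs CP2K-specific options in md-qmmm-nvt.mdp. A minimal sketch of that section follows (the method, charge, and multiplicity are illustrative assumptions, not my actual settings); qmmm-cp2k-qmgroup must match the QMatoms group created with make_ndx above:
; QM/MM with CP2K (illustrative values)
qmmm-cp2k-active         = true      ; enable the CP2K QM/MM interface
qmmm-cp2k-qmgroup        = QMatoms   ; index group defining the QM region
qmmm-cp2k-qmmethod       = PBE       ; assumption: DFT functional for the QM part
qmmm-cp2k-qmcharge       = 0         ; assumption: net charge of the QM region
qmmm-cp2k-qmmultiplicity = 1         ; assumption: spin multiplicity of the QM region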
mdrun performs as expected in the MM part, with maximum GPU usage. The low performance is observed only in the QM/MM mdrun.
As I said, please provide the log file (up to the first MD step) from gmx_cp2k mdrun -s egfp-qmmm-nvt.tpr -deffnm egfp-qmmm-nvt -v; that will give us more information.
There is also another post on the forum that might be of interest to you: GROMACS 2022+CP2K with GPU - #2 by dmorozov. Apparently not all GPUs are supported with CP2K.
Thank you very much.
Please look at the log files:
egfp-em.log (354.7 KB)
egfp-mm-nvt.log (27.4 KB)
egfp-qmmm-nvt.log (18.0 KB)
This question is also being answered here: Mdrun : An error occurred in MPI_Allreduce
I think we should continue there.
QM/MM in CP2K is not yet implemented on the GPU; there was a plan, but I am not sure where it stands now. So when the calculation hits the QM/MM part, you are bottlenecked by CP2K's CPU calculations. See here for the status of the QM/MM GPU implementation:
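In the meantime, since the QM step runs entirely on CPUs, the main lever is giving CP2K more CPU resources. A minimal sketch (the rank and thread counts are illustrative, and this assumes your gmx_cp2k is the MPI-enabled build that the CP2K interface requires):
mpirun -np 8 gmx_cp2k mdrun -s egfp-qmmm-nvt.tpr -deffnm egfp-qmmm-nvt -ntomp 4 -v
Here 8 MPI ranks times 4 OpenMP threads would use 32 cores; tune both numbers to your node.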