Gromacs 2021 multiple time step algorithm

GROMACS version: 2021
GROMACS modification: No

The GROMACS 2021 release notes state that an efficient multiple-time-step (MTS) algorithm has been introduced in this version. I found some information about the relevant keywords in the recently published GROMACS 2021 user manual. However, I could not find any reference to, or discussion of, the underlying algorithm in the documentation.
I also wanted to check whether there is a published test case for the multiple-time-step algorithm that one can benchmark against.

I realize now that I forgot to add a section on MTS to the manual (I had planned to do this before the release). The algorithm is standard r-RESPA; the word “efficient” refers to the implementation. My original goal was to replace the GROMACS virtual-site setup with a 4 fs time step with MTS, where only bonds and angles are evaluated every 2 fs, but that turns out to lead to instabilities with the Amber, and likely also CHARMM, force fields, which occur infrequently (every few hundred ns).

Thank you so much. :)

To follow up, are there any benchmark systems for the speed increase? I’m currently running a lipid membrane system in water (CHARMM36m force field) with GROMACS 2021 (GPU). I see speeds of ~80 ns/day with 2021 (with MTS enabled) vs ~70 ns/day with 2020.5. Unfortunately, it is not a perfect one-to-one comparison, since MTS apparently does not support bonded and PME calculations on the GPU (which would be nice).

What MTS scheme are you running? If you compute bondeds every step then both bondeds and PME should be able to run on the GPU.
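Keeping bondeds at every step corresponds to an mdp setup along these lines (a minimal sketch; option names as in the GROMACS 2021 mdp reference, values here are illustrative):

```
integrator         = md
dt                 = 0.002                  ; 2 fs inner (base) time step
mts                = yes
mts-levels         = 2
mts-level2-forces  = longrange-nonbonded   ; only PME long-range at the slow level
mts-level2-factor  = 2                     ; slow forces every 2 steps, i.e. every 4 fs
```

With this split, all bonded terms remain at level 1 and are evaluated every step.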

But with MTS the force reduction and integration are done on the CPU, so if you have a fast GPU and a slow CPU, MTS might actually be slower.

I should have been more specific: I am trying to run with PME on the GPU using the direct-communication feature, but I get this error/warning saying that multiple time stepping is not supported:

Reading file memb_treh_eq1.tpr, VERSION 2021 (single precision)
This run will default to '-update gpu' as requested by the GMX_FORCE_UPDATE_DEFAULT_GPU environment variable. GPU update with domain decomposition lacks substantial testing and should be used with caution.

Enabling GPU buffer operations required by GMX_GPU_DD_COMMS (equivalent with GMX_USE_GPU_BUFFER_OPS=1).

This run has requested the 'GPU halo exchange' feature, enabled by the GMX_GPU_DD_COMMS environment variable.

GMX_GPU_PME_PP_COMMS environment variable detected, but the 'GPU PME-PP communications' feature was not enabled as PME is not offloaded to the GPU.
Changing nstlist from 20 to 100, rlist from 1.2 to 1.286

Update task on the GPU was required, by the GMX_FORCE_UPDATE_DEFAULT_GPU environment variable, but the following condition(s) were not satisfied:

With domain decomposition, PME must run fully on the GPU.
Multiple time stepping is not supported.

Will use CPU version of update.

Also, as an update: there appears to be an error even when direct communication is not enabled:

GROMACS:      gmx mdrun, version 2021
Executable:   /home/dkozuch/programs/gromacs_210gt/bin/gmx_210gt
Data prefix:  /home/dkozuch/programs/gromacs_210gt
Working dir:  /scratch/gpfs/dkozuch/membranes/memb_treh/charmm/DPPC/v2/treh0/run_files
Command line:
  gmx_210gt mdrun -v -deffnm memb_treh_eq1 -ntmpi 4 -ntomp 7 -pin on -nb gpu -bonded gpu -pme gpu -npme 1

Reading file memb_treh_eq1.tpr, VERSION 2021 (single precision)

Program:     gmx mdrun, version 2021
Source file: src/gromacs/taskassignment/decidegpuusage.cpp (line 493)
Function:    bool gmx::decideWhetherToUseGpusForBonded(bool, bool, gmx::TaskTarget, const t_inputrec&, const gmx_mtop_t&, int, bool)
MPI rank:    0 (out of 4)

Inconsistency in user input:
Bonded interactions cannot run on GPUs: Cannot run with multiple time stepping

This happens with:

integrator              = md
mts                     = yes
mts-level2-forces       = longrange-nonbonded

If you know how to apply diffs, automatically or manually, you can try this change:

diff --git a/src/gromacs/listed_forces/gpubonded_impl.cpp b/src/gromacs/listed_forces/gpubonded_impl.cpp
index ff62325723..17b065ebbd 100644
--- a/src/gromacs/listed_forces/gpubonded_impl.cpp
+++ b/src/gromacs/listed_forces/gpubonded_impl.cpp
@@ -1,7 +1,7 @@
  * This file is part of the GROMACS molecular simulation package.
- * Copyright (c) 2018,2019,2020, by the GROMACS development team, led by
+ * Copyright (c) 2018,2019,2020,2021, by the GROMACS development team, led by
  * Mark Abraham, David van der Spoel, Berk Hess, and Erik Lindahl,
  * and including many others, as listed in the AUTHORS file in the
  * top-level source directory and at
@@ -49,6 +49,7 @@
 #include "gromacs/listed_forces/gpubonded.h"
 #include "gromacs/mdtypes/inputrec.h"
+#include "gromacs/mdtypes/multipletimestepping.h"
 #include "gromacs/topology/topology.h"
 #include "gromacs/utility/stringutil.h"
@@ -149,9 +150,12 @@ bool inputSupportsGpuBondeds(const t_inputrec& ir, const gmx_mtop_t& mtop, std::
-    if (ir.useMts)
+    if (ir.useMts
+        && (forceGroupMtsLevel(ir.mtsLevels, MtsForceGroups::Pair) > 0
+            || forceGroupMtsLevel(ir.mtsLevels, MtsForceGroups::Dihedral) > 0
+            || forceGroupMtsLevel(ir.mtsLevels, MtsForceGroups::Angle) > 0))
-        errorReasons.emplace_back("Cannot run with multiple time stepping");
+        errorReasons.emplace_back("Cannot run with multiple time stepping for bondeds");
     if (ir.opts.ngener > 1)