Slow NPT simulation


Hello,

I have a small system containing 134 molecules of water. I managed to run some simulations on it without problems. However, now, for some reason, the simulation runs very slowly. The .mdp file is:

; Run control
integrator = sd ; Langevin dynamics
tinit = 0
dt = 0.001
nsteps = 20000000 ; 20 ns
nstcomm = 100

; Output control
nstxout = 500
nstvout = 500
nstfout = 0
nstlog = 500
nstenergy = 500
nstxout-compressed = 0

; Neighborsearching and short-range nonbonded interactions
cutoff-scheme = verlet
nstlist = 20
ns_type = grid
pbc = xyz
rlist = 0.3

; Electrostatics
coulombtype = PME
rcoulomb = 0.5

; van der Waals
vdwtype = cutoff
vdw-modifier = potential-switch
rvdw-switch = 0.4
rvdw = 0.5

; Apply long range dispersion corrections for Energy and Pressure
DispCorr = EnerPres

; Spacing for the PME/PPPM FFT grid
fourierspacing = 0.12

; EWALD/PME/PPPM parameters
pme_order = 6
ewald_rtol = 1e-06
epsilon_surface = 0

; Temperature coupling
; tcoupl is implicitly handled by the sd integrator
tc_grps = system
tau_t = 1.0
ref_t = 298

; Pressure coupling is on for NPT
Pcoupl = C-rescale
tau_p = 1.0
compressibility = 4.5e-05
ref_p = 1.0

; Free energy control stuff
free_energy = no

; Do not generate velocities
gen_vel = no

; options for bonds
constraints = h-bonds ; we only have O-H bonds here

; Type of constraint algorithm
constraint-algorithm = lincs

; Constrain the starting configuration
; since we are continuing from NVT
continuation = yes

; Highest order in the expansion of the constraint coupling matrix
lincs-order = 12

I tried it on two computers, with and without GPU support, and in both cases the simulation is very slow. Do you know what could be causing it?

There are quite a few things that I would consider changing:

  • integrator = sd: Is there a specific reason why you are using the Langevin integrator? It is slower than the md integrator, especially if you have a GPU, as the sd integrator cannot offload the update and constraints to the GPU (there is an example below, after the suggested output settings).
  • dt = 0.001: Why such a short time step? 0.002 is almost always OK, provided that you constrain h-bonds (as you do). If you need such a short time step, you have probably not equilibrated the system enough. If necessary you could run with 0.001 for, e.g., 25 ps (25000 steps), and then you can probably switch to 0.002 for this longer simulation.
  • nstxout = 500: Why write coordinates so often? And why write uncompressed coordinates at all?
  • nstvout = 500: Why are you writing velocities?
  • nstlog = 500: Why are you writing the log file so often?
  • nstenergy = 500: Why are you writing energies so often?

I’d suggest:

nstxout = 0
nstvout = 0
nstfout = 0
nstlog = 20000
nstenergy = 20000
nstxout-compressed = 20000

or something along those lines; even those values are unnecessarily low, since with 20 million steps an interval of 20000 still gives 1000 frames, and I don’t think you need more than 200 frames in your output. But you will have to decide what you really need.
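On the integrator point: if you switch to integrator = md you need to set a thermostat explicitly, for example tcoupl = V-rescale with the tc_grps/tau_t/ref_t you already have (just one common choice, pick whatever fits your protocol). With that, a reasonably recent GROMACS build with GPU support can offload most of the work, e.g. something along the lines of

gmx mdrun -deffnm md -nb gpu -pme gpu -update gpu

where -deffnm md is only a placeholder for your own file names. Whether every assignment is accepted depends on your GROMACS version and the rest of the .mdp; mdrun will stop with a clear message if a requested GPU task cannot be honoured.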

  • rcoulomb = 0.5: What force field are you using? This is very low; even if a higher value makes your simulations slower, the results will at least have a chance of being reasonable. Use what is recommended for your force field.
  • rvdw-switch = 0.4: See above.
  • rvdw = 0.5: See above.
  • pme_order = 6: Why not the default 4? You will not be able to run PME on a GPU, or get the SIMD-accelerated PME kernels, if you use 6.
  • ewald_rtol = 1e-06: Why not the default 1e-05?
  • lincs-order = 12: Why not the default 4? (See the snippet after this list.)
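Spelled out, the last three settings at their defaults are simply:

pme_order = 4
ewald_rtol = 1e-05
lincs-order = 4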

Thanks for your considerations, I’ve taken most of them into account. Concerning the rcoulomb and rvdw values, I am using low values because my simulation box is very small (1.5 nm). I am using the OPLS-AA force field. What values would you recommend in this case? I am fairly new to GROMACS and haven’t found anything on the internet.

You should not change rcoulomb and rvdw depending on your box size. The force field is parameterized for certain cutoff values. I would recommend making a larger simulation box, if possible.
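As a rough sketch only (cut-offs around 1.0 nm are what you will commonly see used with OPLS-AA in GROMACS, but check the publication or tutorial your setup is based on rather than taking these values as definitive), the non-bonded section would look more like:

rcoulomb = 1.0
rvdw-switch = 0.9 ; only if you keep the potential-switch modifier
rvdw = 1.0
; with the verlet cutoff-scheme, rlist is set automatically from
; verlet-buffer-tolerance, so the explicit rlist = 0.3 can simply be removed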

To analyze the results I actually need to iterate over all the atoms, for the purposes of my work. That’s why I wanted to limit the size of the box; otherwise the kernel crashes during the analysis. Is there an alternative? When I use the recommended values of rlist etc., GROMACS tells me to change them.

You need a box that is large enough to work with the force field: every box dimension has to be more than twice the largest cut-off, so with cut-offs around 1.0 nm the box must be larger than 2.0 nm, which is why GROMACS complains about your 1.5 nm box. What kind of analysis are you doing? Are you using a GROMACS tool for the analysis? Do you know why the analysis kernel crashes with only a few hundred water molecules?

I am coding hydrogen bond autocorrelation functions with Python. I’m not using GROMACS for this. I’m testing a new function and for this one I need to iterate over all atoms for each frame. For a box containing 500 molecules the kernel ends up crashing.

OK. In that case I understand why you needed to write coordinates fairly often (even if compressed output should be good enough).

I’m afraid I don’t have a solution for your crashes from the GROMACS side. If the script cannot be fixed to work with a simulation box that is compatible with your force field, then alternative solutions may have to be found.
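For what it’s worth, if the crash turns out to be memory-related (which is only a guess on my part), processing the trajectory one frame at a time instead of loading everything at once usually keeps the memory footprint bounded. A minimal sketch of that pattern with MDAnalysis (not a GROMACS tool; the file names, the OW selection and the 3.5 Å distance criterion are placeholders, and the actual hydrogen-bond definition and autocorrelation are left to your own function):

import numpy as np
import MDAnalysis as mda
from MDAnalysis.lib.distances import distance_array

# Placeholder file names: topology (.tpr) plus compressed trajectory (.xtc)
u = mda.Universe("md.tpr", "md.xtc")
oxygens = u.select_atoms("name OW")  # water oxygens; adjust to your topology

per_frame_counts = []
for ts in u.trajectory:  # only one frame is held in memory at a time
    # all O...O distances in this frame, respecting periodic boundaries
    d = distance_array(oxygens.positions, oxygens.positions, box=ts.dimensions)
    np.fill_diagonal(d, np.inf)  # ignore self-distances
    # crude geometric criterion: O...O below 3.5 A (counts ordered pairs)
    per_frame_counts.append(int(np.count_nonzero(d < 3.5)))

print(per_frame_counts[:10])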