Gmx velacc nonsense?

Hi all,

It’s been a while since we calculated velocity autocorrelations, so here is a question.

We have a system (mostly water, NVT, v-rescale thermostat) whose physics looks perfectly fine, but when we run gmx velacc, with or without normalization, the autocorrelation column of the output shows something like 1 (normalized) or 2 (unnormalized) in the first row, and then zeros or values like 0.00003 in all the remaining rows.
Velocities are written every 10 ps, and the total simulated time is 100 ns.

Does this mean that our system is so well behaved that the autocorrelation has fully decayed below the output precision, or are we doing something wrong?
The base command is

gmx velacc -f traj.trr -s run.tpr -o velacc.xvg

and then we just play around with -normalize and -acflen, neither of which changes the output qualitatively. Any and all help will be appreciated.

GROMACS version: 2022.1

What is the time scale you expect from the phenomenon you want to see? In my experience (mostly force autocorrelations of water), these autocorrelation functions show most of their signal at times < 2 ps. You can see this clearly in older papers as well. Try a short run that writes velocities every few fs and see if the autocorrelation is still zero!
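The suggested short run could be set up with an .mdp fragment along these lines (the values here are illustrative assumptions, not taken from the original run): with dt = 0.002 ps, writing velocities every 5 steps gives a 10 fs output interval, which comfortably resolves a sub-ps decay.

```
; illustrative values, not the poster's actual settings
dt      = 0.002   ; 2 fs time step
nsteps  = 50000   ; 100 ps short run is plenty for a ~0.2 ps decay
nstvout = 5       ; write velocities to the .trr every 10 fs
```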

Indeed. Liquid-state MD is highly overdamped; all velocity correlations decay to nearly immeasurable levels within ~2 ps.
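This is exactly why a 10 ps output interval produces the "1, then zeros" pattern in the question. A minimal NumPy sketch (illustrative parameters, not the poster's system): model the velocity as an Ornstein-Uhlenbeck process with a 0.2 ps correlation time, then compare the VACF computed from finely sampled data against the same data subsampled every 10 ps.

```python
import numpy as np

rng = np.random.default_rng(0)
dt = 0.002    # ps, typical MD time step (assumed)
tau = 0.2     # ps, velocity correlation time mentioned in the thread
n = 500_000   # steps -> 1 ns of synthetic "trajectory"

# Exact OU update: v_{k+1} = a*v_k + sqrt(1 - a^2)*noise, so <v(0)v(t)> ~ exp(-t/tau)
a = np.exp(-dt / tau)
v = np.empty(n)
v[0] = rng.standard_normal()
noise = rng.standard_normal(n - 1)
for k in range(n - 1):
    v[k + 1] = a * v[k] + np.sqrt(1.0 - a * a) * noise[k]

def vacf(x, nlags):
    """Normalized autocorrelation <v(0)v(t)> / <v^2>."""
    x = x - x.mean()
    c = np.array([np.dot(x[:len(x) - k], x[k:]) / (len(x) - k)
                  for k in range(nlags)])
    return c / c[0]

# Sampled every 2 fs: the ~0.2 ps decay is fully resolved.
fine = vacf(v, 500)                  # lags up to 1 ps
lag_tau = round(tau / dt)            # lag corresponding to t = tau

# Sampled every 10 ps (the output frequency in the question):
stride = round(10.0 / dt)            # 5000 steps between samples
coarse = vacf(v[::stride], 5)

print(fine[lag_tau])   # close to exp(-1) ~ 0.37: decay is visible
print(coarse)          # 1 at lag 0, then statistical noise near 0
```

With 10 ps between frames, successive velocity samples are effectively independent, so the estimator can only ever return 1 at lag zero and noise everywhere else, which matches the output described above.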

Sorry, I forgot to update my post earlier, because we figured it out. This is indeed the case: the correlation time constant is around 0.1-0.2 ps, which tracks with the v-rescale settings. Thanks for responding!

The v-rescale settings do not affect the velocity autocorrelation of individual atoms (in principle they could, but in practice the effect is negligible).