GROMACS version: 2026.1
GROMACS modification: Yes - Patched with PLUMED/CP2K
Hi,
I am currently trying to apply the AWH method to a small QM/MM system (two amino acids in water). Unfortunately, I am facing an assertion failure that is not entirely clear to me:
-------------------------------------------------------
Program: gmx mdrun, version 2026.1
Source file: src/gromacs/applied_forces/awh/biasstate.cpp (line 1743)
Function: gmx::BiasState::getSharedPointCorrelationIntegral(int) const::<lambda()>
Assertion failed:
Condition: sharedCorrelationTensorTimeIntegral_[gridPointIndex][i] == 0
Correlation tensor time integral of unvisited points should be 0.
For more information and tips for troubleshooting, please check the GROMACS
website at https://manual.gromacs.org/current/user-guide/run-time-errors.html
-------------------------------------------------------
A similar issue was already reported earlier at 11857.
I have attached my .mdp file; the most relevant options seem to be awh-nstsample and awh-nsamples-update. If I set both to 1, the simulation runs without any assertion failure. Setting awh-nstsample = 50 (roughly the autocorrelation time of my CV) and awh-nsamples-update = 100 consistently triggers the assertion after 200 steps. On the other hand, the combination awh-nstsample = 1 and awh-nsamples-update = 5000 terminates after 10 steps. I am now wondering whether this error is caused by other unfavourable settings in my .mdp file or whether it is a bug in GROMACS 2026.1. By the way, the system runs stably without any bias. I would like to stick to this version because it implements a larger variety of functionals than earlier versions.
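For context, the AWH-related part of my .mdp looks roughly like the following (the pull group definitions are omitted, and every value except the two sampling options is only an illustrative placeholder, not necessarily my exact setting):

pull                             = yes
pull-ncoords                     = 1
pull-coord1-geometry             = distance
pull-coord1-type                 = external-potential
pull-coord1-potential-provider   = awh

awh                              = yes
awh-nbias                        = 1
awh-nstsample                    = 50      ; sampling interval in MD steps
awh-nsamples-update              = 100     ; samples collected per bias update
awh1-ndim                        = 1
awh1-dim1-coord-provider         = pull
awh1-dim1-coord-index            = 1
awh1-dim1-start                  = -0.125  ; nm
awh1-dim1-end                    = 0.125   ; nm
awh1-dim1-force-constant         = 128000  ; placeholder value
awh1-dim1-diffusion              = 5e-5    ; placeholder value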
I would be very grateful for any answers or suggestions!
But your update interval is very large. In steps it is the product awh-nsamples-update * awh-nstsample. I would suggest decreasing awh-nsamples-update significantly; you could set it to 1 or 10.
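To make that concrete: with awh-nstsample = 50 and awh-nsamples-update = 100 the first bias update only happens after 50 * 100 = 5000 MD steps, and awh-nstsample = 1 with awh-nsamples-update = 5000 gives the same 5000-step interval. With awh-nstsample = 50 and awh-nsamples-update = 10 you would instead update every 500 steps.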
Thanks @hess for the quick response. Indeed, my simulation is stable if I set awh-nsamples-update to 1. I naively assumed that it would be better to sample more often before updating the bias potential. So there is no difference between updating once after 50 samples and updating 50 times after 1 sample each, apart from some computational overhead? And why does this assertion appear for a larger number of samples in the first place?
AWH will always converge to the correct answer. If you update extremely infrequently, the convergence will just be slower. There is a minor computational cost to taking samples, but the update is local, so it is very cheap.
I don’t know why the assertion failure appears. If I did, I would fix this bug. My guess is that we have never run in the regime where the first update occurs only after you have sampled a large part of, or even the whole, range. But if it is something general, I should be able to reproduce it with a simple system.
Thanks for the clarification. Indeed, the range of my collective variable is rather short (-0.125 nm to +0.125 nm), and it seems that its first part (-0.125 nm to 0.0 nm) is sampled relatively quickly.
I understand that updating extremely infrequently will make the convergence slower, but what happens if I update too frequently? That is, what if awh-nstsample underestimates the true correlation time of the CV (e.g., I use awh-nstsample = 50 with awh-nsamples-update = 1, but the true correlation time is 100 steps)? Does that again slow down the convergence, or could it even prevent potential barriers from being overcome?
Okay, if updating every step is fine, then the combination awh-nstsample = 1 and awh-nsamples-update = 1 could be quite useful for my QM/MM setup, where the number of simulation steps is rather limited.
Note that the data is very correlated, so it won’t give you noticeably more sampling than, for example, awh-nstsample = 10. But with QM/MM the computational cost of sampling is negligible anyway.