GROMACS version: 2020.3
GROMACS modification: No
Greetings, and thanks for your prior help! I have a maybe-not-so-neophyte question despite being a neophyte.
I’ve “managed” (kind of) to port a custom force field to GROMACS (no errors thrown by grompp), but I can’t quite seem to test it. On starting mdrun I hit a fatal error stating that Buckingham is not supported with the Verlet cut-off scheme.
The nonbonded portion of our force field is all Buckingham. Is support for Buckingham with the Verlet cut-off scheme available in a newer version? Updating to a 2021 version of GROMACS would be a relatively easy fix. If necessary, I could move back to an earlier version to get the “group” scheme, but I’m worried I’ll lose the GPU speed if I do that.
In the 2020.3 version this is effectively a fatal problem for me, because the alternative “group” cut-off scheme has been removed.
Is there a workable fix? Thanks!
The error message is outdated. Buckingham has never been supported with the Verlet cut-off scheme, which has been the only option since the 2019 release. You would have to use the 2018 release.
Ok, thank you. I can at least get a start that way.
How hard would it be for me to dig into the modern GROMACS code with GPU capability and swap Lennard-Jones for Buckingham in the critical spots? I lose GPU acceleration by switching to the older version.
The main work is propagating the parameters: LJ has two parameters, Buckingham has three. Replacing r^12 by exp() in the kernel is straightforward.
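For anyone following along, here is a minimal sketch of the two functional forms being compared (plain C++, illustrative only, not GROMACS kernel code; the parameter names are the conventional ones, not GROMACS identifiers):

```cpp
#include <cmath>

// Lennard-Jones 12-6: two parameters, C6 and C12.
//   V(r) = C12/r^12 - C6/r^6
double vLennardJones(double r, double c6, double c12)
{
    const double rinv6 = 1.0 / (r * r * r * r * r * r);
    return c12 * rinv6 * rinv6 - c6 * rinv6;
}

// Buckingham (exp-6): three parameters, A, B, and C6.
//   V(r) = A*exp(-B*r) - C6/r^6
// The repulsive r^-12 term is replaced by an exponential, which is
// why a third parameter is needed.
double vBuckingham(double r, double a, double b, double c6)
{
    const double rinv6 = 1.0 / (r * r * r * r * r * r);
    return a * std::exp(-b * r) - c6 * rinv6;
}
```

The exponential repulsion is what forces the third parameter: A and B together play the role that C12 plays alone in the 12-6 form.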
My PI is interested in moving forward with some studies quickly, and he has also talked about doing this in a way that is responsible and beneficial to the community. I’ve been looking through the developer-related links and references, down to the Doxygen notes, and studying how to approach the problem. I want to approach this work as responsibly as possible, and I would happily switch to a different forum or the developer mailing list if these kinds of questions would be more fruitful there.
Is there currently a timeline for adding Buckingham support to the modern Verlet cut-off implementation? If it’s faster for us to just wait for the functionality to roll out, that might be better than getting involved while people more familiar with the code are already working on it. Otherwise, I know there are a limited number of hands; maybe we can do something small to help out.
There are no plans to reimplement Buckingham. But the parameter reading and I/O are still present and should be functional. Adding an exp() to the kernel is easy. The main work is setting the parameters in nbfp(_aligned) in nbnxn_atomdata_t::Params.
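To make “propagating the parameters” concrete, here is a hedged sketch of the kind of flat per-type-pair table involved. The names, stride, and layout below are my own illustration; the actual layout of nbfp(_aligned) in nbnxn_atomdata_t::Params should be checked against the sources:

```cpp
#include <vector>

// Illustrative only: a flat nonbonded-parameter table with one
// (C6, C12) entry pair per pair of atom types, which is what a
// two-parameter LJ form needs.
std::vector<double> buildLjTable(const std::vector<double>& c6,
                                 const std::vector<double>& c12,
                                 int ntypes)
{
    std::vector<double> nbfp(2 * ntypes * ntypes);
    for (int i = 0; i < ntypes; i++)
    {
        for (int j = 0; j < ntypes; j++)
        {
            nbfp[2 * (ntypes * i + j) + 0] = c6[ntypes * i + j];
            nbfp[2 * (ntypes * i + j) + 1] = c12[ntypes * i + j];
        }
    }
    return nbfp;
}
// A Buckingham table would need a stride of 3 (A, B, C6) instead of 2;
// that stride change has to be propagated to every place the table is
// indexed, which is the bulk of the work.
```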
Thanks for the direction! I’ll join the dev email list and we’ll start a branch to see if we can create something of use.
Thanks for your help thus far. I have an additional question.
Before trying to figure out how to alter the code to implement three parameters, I dialed back to simply learning how the current two-parameter mechanism works. I’ve found what appear to be the key spots in the inner kernel code and in the nbnxn module that are pivotal to calculating Lennard-Jones 6-12 quantities.
My question is this: if I were to switch the code to a two-parameter LJ 6-10 potential (slightly softer at small separations, closer to Buckingham without going to three parameters), I need to find where sigma and epsilon are converted to C6 and C12 internally, so I can change that conversion to produce C6 and C10 instead. This conversion doesn’t appear to occur in the nbnxn code.
I’m inferring that it must occur somewhere because the OPLS-AA force field is read in as sigma and epsilon, whereas the nbnxn functions appear to work exclusively in C6 and C12. I even found several places where the code deals in sigma and epsilon, but each is recomputed as a function of C6 and C12.
Where does the initial conversion to C6 and C12 take place?
It looks like I may be able to sidestep this by specifying combination rule 1 in [ defaults ]: I can bypass the conversion by fully specifying [ nonbond_params ].
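For reference, the conversion itself is standard: for the 12-6 form, C6 = 4·ε·σ^6 and C12 = 4·ε·σ^12. Here is a small sketch, including the generalized Mie prefactor a 6-10 form would need (the 6-10 helper is my own illustration of the math, not something present in the GROMACS sources):

```cpp
#include <cmath>

struct CPair { double cAttr; double cRep; };

// Standard sigma/epsilon -> C6/C12 conversion for the 12-6 potential:
//   C6 = 4*eps*sigma^6,  C12 = 4*eps*sigma^12
CPair sigmaEpsToC6C12(double sigma, double eps)
{
    const double sigma6 = std::pow(sigma, 6.0);
    return { 4.0 * eps * sigma6, 4.0 * eps * sigma6 * sigma6 };
}

// Generalized Mie n-m prefactor:
//   V(r) = C*eps*((sigma/r)^n - (sigma/r)^m)
//   C    = (n/(n-m)) * (n/m)^(m/(n-m))
// For n=12, m=6 this reduces to the familiar factor 4; a 6-10
// potential would use n=10, m=6 instead.
double miePrefactor(double n, double m)
{
    return (n / (n - m)) * std::pow(n / m, m / (n - m));
}
```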
Pardon me for yet another post today. The source code still seems to imply that C6 and C12 enter the kernel as 6.0*C6 and 12.0*C12, presumably for the force. If I’m switching to C10, I still need to find where the 12.0 is multiplied in, so I can change it to 10.0.
These are set in forcerec.cpp, but also divided out again in atomdata.cpp. Search for 12.0.
I hope I didn’t miss any factors of 12 elsewhere.
We should actually use reppow from interaction_const_t everywhere.
Thanks for your help. I’ve been able to trace most of the logic you’re talking about.
Out of curiosity, where is the scalar 12.0 needed, if it’s divided out again before going into the kernel? Having stumbled over reppow in forcerec, I must say that looks useful if it were applied consistently.
A scalar 1/12 is needed to compute the energies. The forces carry the factor 12, and most often we only need to compute forces, so incorporating the prefactor into the coefficient saves a floating-point operation.
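A minimal sketch of the trade-off described above (illustrative, not the actual kernel code): fold 6 and 12 into the stored coefficients so the force scalar needs no extra multiplies, and divide them back out only when energies are requested.

```cpp
#include <cmath>

struct ForceEnergy { double fscal; double energy; };

// Pair interaction with "prefolded" coefficients: the caller passes
// c6f = 6*C6 and c12f = 12*C12.  The force scalar (F_vec = fscal*r_vec)
// then comes out directly, while the energy divides the factors back out.
ForceEnergy ljPair(double rsq, double c6f, double c12f)
{
    const double rinv2  = 1.0 / rsq;
    const double rinv6  = rinv2 * rinv2 * rinv2;
    const double rinv12 = rinv6 * rinv6;
    ForceEnergy fe;
    fe.fscal  = (c12f * rinv12 - c6f * rinv6) * rinv2;
    fe.energy = c12f / 12.0 * rinv12 - c6f / 6.0 * rinv6;
    return fe;
}
```

For a 6-10 form the folded repulsive factor would be 10 rather than 12, which is exactly the constant being chased here.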
An additional, maybe stupid, related question: is there any place relevant to mdrun where energy or force is calculated outside the inner kernel in nbnxm? It confused me a little that the factor of 12.0 is multiplied in at forcerec and then divided out again in atomdata.cpp before apparently reaching the kernel. In trying to enable a GROMACS version that can use LJ 6-10 nonbonded potentials, I want to be certain whether any alterations are needed beyond the Verlet kernel to make the program fully consistent with C10 in place of C12.
I understand why there are no plans to put Buckingham (or reppow) back in. That inner kernel seems like it would be difficult to optimize for other potentials simultaneously.