AWH with direction-relative | Assertion failed: Condition: weightSum > 0

GROMACS version: 2024.3-plumed_2.9.3
GROMACS modification: Yes/No

Hi,

We are working on a protein system in which we want to explore the PMF of a ligand moving out of its pocket into the solvent. We want to steer the ligand along a given vector, so we use AWH in combination with the ‘direction-relative’ pull geometry:

pull-group1-name = PHE_714_CA
pull-group2-name = LIGAND_N
pull-group3-name = PHE_714_C
pull-group4-name = B_BARREL_CA

The LIGAND_N index group is a single ligand atom that is pushed along the vector, and PHE_714_CA is the Cα position of the residue used as reference. In the starting structure, the distance between LIGAND_N and PHE_714_CA is 0.85 nm. The vector itself is defined by the PHE_714_C atom and the B_BARREL_CA index group, a set of 6 atoms that surround the β-barrel opening through which we want to push the ligand. The entire mdp file is given here:

=================================== SNIPPET STARTS HERE =========================================
; Run control
integrator = md
dt = 0.002
nsteps = 50000000000

; Output control
nstenergy = 5000
nstlog = 5000
nstxout = 0
nstxout-compressed = 5000000

; Neighbour searching
cutoff-scheme = Verlet
nstlist = 20
pbc = xyz
rlist = 1.2

; Electrostatics
coulombtype = PME
rcoulomb = 1.2
pme_order = 4
fourierspacing = 0.16

; Van der Waals
vdwtype = cutoff
rvdw = 1.2
vdw-modifier = force-switch
rvdw-switch = 1.0
DispCorr = no

; Temperature coupling
tcoupl = V-rescale
tc_grps = Protein_And_Ligand Water_And_Salt
tau_t = 0.1 0.1
ref_t = 300.0 300.0

; Velocity generation
gen_vel = no
gen_temp = 300.0
gen_seed = -1

; Pressure coupling
pcoupl = c-rescale
pcoupltype = isotropic
tau_p = 2.0
ref_p = 1.0
compressibility = 4.5e-5
refcoord-scaling = all

; Bonds
constraints = h-bonds
constraint_algorithm = LINCS
continuation = yes
lincs-iter = 1
lincs-order = 4

; Pull
pull = yes
pull-ngroups = 4
pull-ncoords = 1
pull-nstxout = 5000

pull-group1-name = PHE_714_CA
pull-group2-name = LIGAND_N
pull-group3-name = PHE_714_C
pull-group4-name = B_BARREL_CA

pull-coord1-geometry = direction-relative
pull-coord1-groups = 1 2 3 4
pull-coord1-type = external-potential
pull-coord1-potential-provider = AWH

; AWH
awh = yes
awh-nbias = 1
awh-nstout = 50000
awh-nstsample = 10
awh-nsamples-update = 100
awh1-target = constant
awh1-equilibrate-histogram = yes
awh1-ndim = 1
awh1-dim1-coord-index = 1
awh1-dim1-start = 0.5
awh1-dim1-end = 2.0
awh1-dim1-force-constant = 40000
awh1-dim1-diffusion = 5e-5
awh1-error-init = 10
awh-share-multisim = no
awh1-share-group = 1
awh1-dim1-cover-diameter = 0.5
=================================== SNIPPET STOPS HERE ===============================================================================

We generate the tpr file in the usual manner:

$ gmx grompp -f 04-awh.mdp -o 04-awh.tpr -n index.ndx -p topol.top -c 03-npt.gro

and then start mdrun for 100,000 steps:

$ gmx mdrun -deffnm 04-awh -nsteps 100000 -v

This leads to the following error:

=================================== SNIPPET STARTS HERE ================================================================================
:-) GROMACS - gmx mdrun, 2024.3-plumed_2.9.3 (-:

Executable: /usr/local/gromacs-2024-3/bin/gmx
Data prefix: /usr/local/gromacs-2024-3
Working dir: /home/hans/stijn/reset
Command line:
gmx mdrun -deffnm 04-awh -nsteps 100000 -v

Back Off! I just backed up 04-awh.log to ./#04-awh.log.3#
Reading file 04-awh.tpr, VERSION 2024.3-plumed_2.9.3 (single precision)
Overriding nsteps with value passed on the command line: 100000 steps, 200 ps
Changing nstlist from 20 to 100, rlist from 1.222 to 1.343

Update groups can not be used for this system because atoms that are (in)directly constrained together are interdispersed with other atoms

1 GPU selected for this run.
Mapping of GPU IDs to the 2 GPU tasks in the 1 rank on this node:
PP:0,PME:0
PP tasks will do (non-perturbed) short-ranged interactions on the GPU
PP task will update and constrain coordinates on the GPU
PME tasks will do all aspects on the GPU
Using 1 MPI thread
Using 32 OpenMP threads

Back Off! I just backed up 04-awh_pullx.xvg to ./#04-awh_pullx.xvg.3#

Back Off! I just backed up 04-awh_pullf.xvg to ./#04-awh_pullf.xvg.3#

Back Off! I just backed up 04-awh.xtc to ./#04-awh.xtc.3#

Back Off! I just backed up 04-awh.edr to ./#04-awh.edr.3#
starting mdrun 'Protein and ligand in water'
100000 steps, 200.0 ps.
step 0Warning: Only triclinic boxes with the first vector parallel to the x-axis and the second vector in the xy-plane are supported.
Box (3x3):
Box[ 0]={ nan, nan, nan}
Box[ 1]={ nan, nan, nan}
Box[ 2]={ nan, nan, nan}
Can not fix pbc.

Warning: Only triclinic boxes with the first vector parallel to the x-axis and the second vector in the xy-plane are supported.
Box (3x3):
Box[ 0]={ nan, nan, nan}
Box[ 1]={ nan, nan, nan}
Box[ 2]={ nan, nan, nan}
Can not fix pbc.

Warning: Only triclinic boxes with the first vector parallel to the x-axis and the second vector in the xy-plane are supported.
Box (3x3):
Box[ 0]={ nan, nan, nan}
Box[ 1]={ nan, nan, nan}
Box[ 2]={ nan, nan, nan}
Can not fix pbc.


Program: gmx mdrun, version 2024.3-plumed_2.9.3
Source file: src/gromacs/applied_forces/awh/biasstate.cpp (line 1354)
Function: gmx::BiasState::updateProbabilityWeightsAndConvolvedBias(gmx::ArrayRef, const gmx::BiasGrid&, gmx::ArrayRef, std::vector<double, gmx::Allocator<double, gmx::AlignedAllocationPolicy> >*) const::<lambda()>

Assertion failed:
Condition: weightSum > 0
zero probability weight when updating AWH probability weights.

For more information and tips for troubleshooting, please check the GROMACS

website at Common errors when using GROMACS - GROMACS 2026.1 documentation

=================================== SNIPPET STOPS HERE ===============================================================================

It should be noted that a normal AWH pull calculation with only the LIGAND_N and PHE_714_CA groups works fine. With both the pull and AWH parts disabled, the system also runs fine. We therefore suspect the ‘direction-relative’ geometry, but we cannot figure out what is going wrong. Any help would be appreciated.
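To illustrate what we think might be going wrong: as we understand the manual, the ‘direction-relative’ coordinate is the projection of the group1→group2 separation onto the unit vector pointing from the COM of group 3 to the COM of group 4. A minimal numpy sketch (the coordinates below are hypothetical, not from our system) shows how a degenerate pull vector would propagate NaN:

```python
# Simplified sketch of the 'direction-relative' pull coordinate:
# the pull axis runs from the COM of group 3 (PHE_714_C) to the COM of
# group 4 (B_BARREL_CA); the coordinate is the group1->group2 separation
# projected onto that unit vector. Coordinates here are made up.
import numpy as np

def direction_relative_value(x1, x2, x3, x4):
    """Project the group1->group2 separation onto the group3->group4 axis."""
    v = x4 - x3                    # pull direction (COM of grp4 minus grp3)
    unit = v / np.linalg.norm(v)   # becomes NaN if the two COMs coincide
    return float(np.dot(x2 - x1, unit))

x1 = np.array([0.0, 0.0, 0.0])    # PHE_714_CA (reference)
x2 = np.array([0.5, 0.5, 0.5])    # LIGAND_N
x3 = np.array([0.1, 0.0, 0.0])    # PHE_714_C
x4 = np.array([1.0, 1.0, 1.0])    # B_BARREL_CA centre

print(direction_relative_value(x1, x2, x3, x4))  # well-defined projection

# If groups 3 and 4 ever coincide, the unit vector is 0/0 -> NaN, which
# would propagate into the pull force, the coordinates and eventually the
# box, matching the "Box = nan" warnings and the weightSum assertion.
print(direction_relative_value(x1, x2, x3, x3))  # nan
```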

Thank you in advance,
Hans De Winter - University of Antwerp

This looks like a bug in the pull or AWH code, or something like a division by zero caused by your setup. Could you share the tpr or the input files for grompp with me?

Thanks for looking into this. The files (topol.top, index.ndx, 04-awh.mdp, 04-awh.tpr, 03-npt.gro) can be downloaded from WeTransfer at https://we.tl/t-OyGQBwNwwvX2obVx

Cheers,
Hans

The 04-awh.tpr file runs without issues for me, both with 2024.3 and 2026. I used the same setup as you: 1 MPI rank, 32 OpenMP threads and a GPU. The only other difference I see is that you use a build with PLUMED. It would be good to exclude that as the cause by trying a build without it.

That is very strange. I compiled 2026.1 (without PLUMED) and gave it a try, and I got the following error:

=================================== SNIPPET STARTS HERE =========================================

(base) hans@trappist:~/Downloads$ gmx mdrun -deffnm 04-awh -nsteps 100000 -v
:-) GROMACS - gmx mdrun, 2026.1 (-:

Executable: /usr/local/gromacs-2026-1/bin/gmx
Data prefix: /usr/local/gromacs-2026-1
Working dir: /home/hans/Downloads
Command line:
gmx mdrun -deffnm 04-awh -nsteps 100000 -v

Reading file 04-awh.tpr, VERSION 2026.1 (single precision)
Overriding nsteps with value passed on the command line: 100000 steps, 200 ps
Changing nstlist from 20 to 100, rlist from 1.222 to 1.343

Update groups can not be used for this system because atoms that are (in)directly constrained together are interdispersed with other atoms

1 GPU selected for this run.
Mapping of GPU IDs to the 2 GPU tasks in the 1 rank on this node:
PP:0,PME:0
PP tasks will do non-perturbed short-ranged interactions on the GPU
PP task will update and constrain coordinates on the GPU
PME tasks will do all aspects on the GPU
Using 1 MPI thread
Using 32 OpenMP threads

starting mdrun 'Protein and ligand in water'
100000 steps, 200.0 ps.
step 0Warning: Only triclinic boxes with the first vector parallel to the x-axis and the second vector in the xy-plane are supported.
Box (3x3):
Box[ 0]={ nan, nan, nan}
Box[ 1]={ nan, nan, nan}
Box[ 2]={ nan, nan, nan}
Can not fix pbc.

Warning: Only triclinic boxes with the first vector parallel to the x-axis and the second vector in the xy-plane are supported.
Box (3x3):
Box[ 0]={ nan, nan, nan}
Box[ 1]={ nan, nan, nan}
Box[ 2]={ nan, nan, nan}
Can not fix pbc.

Warning: Only triclinic boxes with the first vector parallel to the x-axis and the second vector in the xy-plane are supported.
Box (3x3):
Box[ 0]={ nan, nan, nan}
Box[ 1]={ nan, nan, nan}
Box[ 2]={ nan, nan, nan}
Can not fix pbc.


Program: gmx mdrun, version 2026.1
Source file: src/gromacs/applied_forces/awh/biasstate.cpp (line 1362)
Function: gmx::BiasState::updateProbabilityWeightsAndConvolvedBias(gmx::ArrayRef, const gmx::BiasGrid&, gmx::ArrayRef, std::vector<double, gmx::Allocator<double, gmx::AlignedAllocationPolicy> >*) const::<lambda()>

Assertion failed:
Condition: weightSum > 0
zero probability weight when updating AWH probability weights.

For more information and tips for troubleshooting, please check the GROMACS

website at Common errors when using GROMACS - GROMACS 2026.1 documentation

=================================== SNIPPET STOPS HERE =========================================

In a second attempt I started from scratch with a bigger box and a longer NPT equilibration, but I got the same issue…
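One thing we still plan to check is whether the step-0 value of the pull coordinate actually falls inside the AWH interval [awh1-dim1-start, awh1-dim1-end] = [0.5, 2.0] nm; a value far outside it could plausibly make all AWH weights underflow to zero. A small sketch for reading the first data line of the pullx output (assuming the standard .xvg layout, time in column 1 and the coordinate in column 2):

```python
# Sketch: check whether the step-0 pull-coordinate value lies inside the
# AWH sampling interval [0.5, 2.0] nm. Assumes the usual pullx.xvg layout
# (comment lines start with '#' or '@'; first data column is time,
# second is the coordinate value).
import os

def first_pull_value(xvg_path):
    """Return the pull-coordinate value on the first data line of an .xvg file."""
    with open(xvg_path) as fh:
        for line in fh:
            if line.startswith(('#', '@')):  # skip xvg comments and legends
                continue
            return float(line.split()[1])
    raise ValueError("no data lines found in " + xvg_path)

if os.path.exists("04-awh_pullx.xvg"):
    value = first_pull_value("04-awh_pullx.xvg")
    print(f"step-0 coordinate: {value:.4f} nm, "
          f"inside interval: {0.5 <= value <= 2.0}")
```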