Water overlap when using editconf with a non-cubic box (truncated octahedron/dodecahedron)

GROMACS version: 2022
GROMACS modification: No

Hi all,

After an apparently unproblematic minimization, my NVT equilibration fails when applying constraints due to overlapping waters*, which I was able to find using cpptraj's check command.

This repeats itself with any kind of box that is not cubic:

gmx pdb2gmx -ignh -f mol.pdb -o mol.gro -water tip3p -ff amber99sb-ildn
# the failure below only shows up when a non-cubic box type is used here (-bt dodecahedron in this example)
gmx editconf -f mol.gro -d 1 -bt dodecahedron -o box_mol.gro -c
gmx solvate -cp box_mol.gro -cs spc216.gro -o sol_mol.gro -p topol.top
gmx grompp -f ../../mdp/min.mdp -c sol_mol.gro -p topol.top -o ions.tpr -maxwarn 5
echo -e "SOL" | gmx genion -s ions.tpr -o ion_mol.gro -p topol.top -pname NA -nname CL -neutral
gmx grompp -f ../../mdp/min.mdp -c ion_mol.gro -p topol.top -o min_mol.tpr -maxwarn 5
# Runs smoothly:
gmx_thread_mpi mdrun -ntomp $OMP_NUM_THREADS -ntmpi $num_mpi -nb gpu -pin on -v -deffnm min_mol

# NVT:
gmx grompp -f ../../mdp/nvt.mdp -c min_mol.gro -p topol.top -o nvt_mol.tpr -r min_mol.gro -maxwarn 5
# This will fail:
gmx_thread_mpi mdrun -ntomp $OMP_NUM_THREADS -ntmpi $num_mpi -nb gpu -pme gpu -bonded gpu -deffnm nvt_mol

I would appreciate any help regarding this issue. I’m attaching all the relevant files.

Thanks!

*: H and O sharing the exact same coordinates: 3.403 10.911 0.506
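
For anyone who wants to check for this without cpptraj: the duplicates can also be found by scanning the coordinate columns of the .gro file directly. Below is a minimal sketch, assuming the standard fixed-width .gro layout without velocity columns; it reads the atom count from line 2 and reports any coordinate triple that appears more than once:

awk 'NR == 2 { natoms = $1 }                       # line 2 of a .gro file holds the atom count
     NR > 2 && NR <= natoms + 2 {
         xyz = substr($0, 21, 24)                  # the three 8-character coordinate fields (x, y, z)
         if (xyz in first) {
             print "duplicate coordinates" xyz
             print "  line " first_nr[xyz] ": " first[xyz]
             print "  line " NR ": " $0
         } else {
             first[xyz] = $0
             first_nr[xyz] = NR
         }
     }' min_mol.gro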

I can not reproduce your results.

Is the overlap already present in the output of the energy minimization that you use to start the NVT simulation?
If so, I could imagine that you were very unlucky and gmx solvate put a hydrogen nearly exactly on top of an oxygen, which might result in a larger attractive force between H and O than the LJ repulsion between O and O.

Yes, the attached dodeca_mol.gro is that output.

If so, I could imagine that you were very unlucky and gmx solvate put a hydrogen nearly exactly on top of an oxygen, which might result in a larger attractive force between H and O than the LJ repulsion between O and O.

Right, but gmx solvate should detect that overlap, right?
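
If I read gmx solvate -h correctly, its overlap check works by removing solvent molecules whose atoms come within the van der Waals radii from vdwradii.dat, scaled by the -scale option (default 0.57), so an exact H-on-O overlap like this one should normally be pruned. As a sketch (not something I have verified to help in this case), one could make that check stricter by re-running the solvation step with a larger scale factor, starting again from the pre-solvation topology:

gmx solvate -cp box_mol.gro -cs spc216.gro -scale 0.65 -o sol_mol.gro -p topol.top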

I can not reproduce your results.

This result with the non-cubic boxes has been consistently reproducible for me. Curiously, I've also managed to get overlapping waters with a cubic box once, which left me perplexed; I gave it another go and the overlap went away. Sadly, I don't have those files.

What I do know now is that this is probably an IBM-specific issue: I just tried to reproduce this on an x86 machine and the dodecahedron box worked just fine.
I guess I’ll run gmx solvate on other machines for now.

Thanks for checking it out.

PS: If it's any use, here is the version output of my GROMACS installation (they named it gmx_thread_mpi):

                     :-) GROMACS - gmx_thread_mpi, 2022 (-:

Executable:   /cineca/prod/opt/applications/gromacs/2022/spectrum_mpi--10.4.0--binary/bin/gmx_thread_mpi
Data prefix:  /cineca/prod/opt/applications/gromacs/2022/spectrum_mpi--10.4.0--binary
Working dir:  /m100_work/AIRC_Fortun21/barletta/d11/run
Command line:
  gmx_thread_mpi --version

GROMACS version:    2022
Precision:          mixed
Memory model:       64 bit
MPI library:        thread_mpi
OpenMP support:     enabled (GMX_OPENMP_MAX_THREADS = 128)
GPU support:        CUDA
SIMD instructions:  IBM_VSX
CPU FFT library:    fftw-3.3.8-vsx
GPU FFT library:    cuFFT
TNG support:        enabled
Hwloc support:      disabled
Tracing support:    disabled
C compiler:         /cineca/prod/opt/compilers/gnu/8.4.0/none/bin/gcc GNU 8.4.0
C compiler flags:   -mcpu=power9 -mtune=power9 -mvsx -pthread -Wno-missing-field-initializers -fexcess-precision=fast -funroll-all-loops -O3 -DNDEBUG
C++ compiler:       /cineca/prod/opt/compilers/gnu/8.4.0/none/bin/g++ GNU 8.4.0
C++ compiler flags: -mcpu=power9 -mtune=power9 -mvsx -pthread -Wno-missing-field-initializers -fexcess-precision=fast -funroll-all-loops -fopenmp -O3 -DNDEBUG
CUDA compiler:      /cineca/prod/opt/compilers/cuda/11.0/none/bin/nvcc nvcc: NVIDIA (R) Cuda compiler driver;Copyright (c) 2005-2020 NVIDIA Corporation;Built on Thu_Jun_11_22:25:59_PDT_2020;Cuda compilation tools, release 11.0, V11.0.194;Build cuda_11.0_bu.TC445_37.28540450_0
CUDA compiler flags:-std=c++17;-gencode;arch=compute_35,code=sm_35;-gencode;arch=compute_37,code=sm_37;-gencode;arch=compute_50,code=sm_50;-gencode;arch=compute_52,code=sm_52;-gencode;arch=compute_60,code=sm_60;-gencode;arch=compute_61,code=sm_61;-gencode;arch=compute_70,code=sm_70;-gencode;arch=compute_75,code=sm_75;-gencode;arch=compute_80,code=sm_80;-Wno-deprecated-gpu-targets;-gencode;arch=compute_53,code=sm_53;-gencode;arch=compute_80,code=sm_80;-use_fast_math;-D_FORCE_INLINES;-mcpu=power9 -mtune=power9 -mvsx -pthread -Wno-missing-field-initializers -fexcess-precision=fast -funroll-all-loops -fopenmp -O3 -DNDEBUG
CUDA driver:        11.0
CUDA runtime:       11.0

I didn’t find the overlapping atoms in the gro file in your archive.

We usually do not attribute issues observed in GROMACS to compiler bugs easily. But as this tool is used by nearly everyone, has not been modified for some time, and we have never heard of such an issue before, I suspect this could be some IBM-related compiler bug.

You'll see them in the raw version of the dodeca_mol.gro file at line 43035:

12090SOL    HW243033   3.403  10.911   0.506

and 59941:

17726SOL     OW59939   3.403  10.911   0.506
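
If it is easier than scrolling, those two lines can be pulled straight from the attached file, for example with

awk 'NR == 43035 || NR == 59941' dodeca_mol.gro

which should print both atoms with the identical coordinates.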

I suspect this could be some IBM-related compiler bug

It could very well be, and it looks like a very nasty bug to track down, so I'll just leave the report here.

Thanks for the help! Next time I’ll try to reproduce my errors on different architectures.