Domain decomposition error + Setting MPI ranks compatible with custom domains

GROMACS version: GROMACS/2020-foss-2019b and GROMACS/2021-foss-2020b
GROMACS modification: Yes (HPC installations, info in linked log file)

Dear all,

I am simulating a protein-ligand complex in a rhombic dodecahedral box of ~409 nm³, with (box-X, box-Y, box-Z) ≈ (8.33, 8.33, 5.89) nm. The system consists of 41,672 atoms.
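(For reference: for a rhombic dodecahedral box with box-vector length d = 8.33 nm, the volume is V = (√2/2)·d³ ≈ 0.707 × 8.33³ ≈ 409 nm³, and box-Z = (√2/2)·d ≈ 5.89 nm, which matches the numbers above.)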

From the documentation, I learned that I can use up to 16 processors on a single node without domain decomposition (DD), which gives a performance of ~30 ns/day.

When I use more than 16 cores, I get the fatal DD error (here, for 18 processors):
“1196 of the 49025 bonded interactions could not be calculated because some atoms involved moved further apart than the multi-body cut-off distance (0.835804 nm) or the two-body cut-off distance (1.59575 nm), see option -rdd, for pairs and tabulated bonds also see option -ddcheck”

However, the same log file shows that the maximum distances in the bonded interactions are considerably smaller than the cut-off values:
“Initial maximum distances in bonded interactions:
two-body bonded interactions: 0.429 nm, LJ-14, atoms 1837 1845
multi-body bonded interactions: 0.488 nm, CMAP Dih., atoms 440 449
Minimum cell size due to bonded interactions: 0.537 nm”

My first question: do these cut-off values also apply to Coulombic interactions, which act over a larger distance? If not, could someone tell me what is causing this error, or how I can 'view' which distances are the troublesome ones?
The error occurs for both GROMACS 2020 and 2021 (I normally work with 2020, but apparently this version did not print the complete error message, while 2021 does). I attach the log file for the 2021 run with 18 processors:
log file: step5_6.log - Google Drive
mdp file: mdout.mdp - Google Drive

I have tried setting -rdd to 1.4 and 1.6, but those values result in another fatal error (initial cell size smaller than the cell size limit). I also tried setting custom domain cells with -dd (see below), to no avail. I have read the documentation ("Getting good performance from mdrun") and the common errors page. I find it hard to believe that 16 processors is the upper limit for my system of 40k+ atoms. (For example, I found a benchmark system of 20k atoms that was run on 8 nodes with 124 MPI ranks per node: docs.bioexcel.eu/gromacs_bpg/en/master/cookbook/cookbook.html)
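To be concrete, the attempts looked roughly like this (step5_6 is the run name from the log above; the exact mpirun/gmx_mpi launch line will differ per cluster, the mdrun options are what matters):

mpirun -np 18 gmx_mpi mdrun -deffnm step5_6 -rdd 1.4    # also tried -rdd 1.6
mpirun -np 18 gmx_mpi mdrun -deffnm step5_6 -dd 2 2 1   # custom DD grid, see below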

Finally, since I use a custom force field for the ligand, I thought the problem might be rooted there. However, using CHARMM-GUI to generate GROMACS run files for the protein only (from the PDB code) results in the same errors, albeit with fewer problematic bonds.


I thought I could still get a performance increase by using fewer domains, so that the system would (hopefully) remain compatible with the cut-off distances. I tried 4 nodes with 16 or 18 processors each, with a custom DD grid of -dd 2 2 1. The idea was to create 4 ranks, one per domain, each using 16 or 18 OpenMP threads (such that ntomp × nmpi = total number of processors). I tested multiple combinations of the MPI -np and mdrun -ntomp settings (see the sketch below), but GROMACS would always use more than 4 ranks, leading to the fatal DD error.
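For clarity, this is the layout I was aiming for (the mpirun/gmx_mpi launch line is again only a sketch; placing one rank per node would be handled by the job script):

# 4 MPI ranks = 4 DD domains (2 x 2 x 1), 16 OpenMP threads per rank,
# so ntomp x nmpi = 64 processors spread over 4 nodes
mpirun -np 4 gmx_mpi mdrun -dd 2 2 1 -ntomp 16 -deffnm step5_6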

My second question: can I increase performance by using multiple nodes, without using too many ranks? I am no computer scientist, and I don't really understand all the thread-related options. Could my simulation be sped up by using 4 domain cells and spreading the computational load across 4 nodes?

Of course, I hope that I can resolve the problem from my first question, which would render the second question obsolete.

If you need any more information or input, I'd be happy to provide it.

Best,
Wouter

Hello,

Can you please provide your input on the error in the image? The other image is for reference.

What can I do to solve this error?

run.sh file, for reference:

#!/bin/bash
#SBATCH -e slurm-%j.err
#SBATCH -o slurm-%j.out
#SBATCH -J 12
#SBATCH --partition=normal
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=32
#SBATCH --cpus-per-task=4
#SBATCH --exclusive

module load gromacs/2023.2
module unload anaconda3/2023.9

# EM

mkdir -p em
gmx grompp -f /home/aswadurk/CG_test_whole/req_files/mdp_files/em.mdp -p topol.top -c sol/sol_neutral.gro -r sol/sol.gro -o em/em.tpr -maxwarn 2
#gmx mdrun -v -deffnm em/em
mpirun gmx_mpi mdrun -pin on -deffnm em/em -tableb /home/aswadurk/CG_test_whole/req_files/angle5_a0.xvg

# NVT

mkdir -p nvt
gmx grompp -f /home/aswadurk/CG_test_whole/req_files/mdp_files/nvt.mdp -p topol.top -c em/em.gro -r sol/sol.gro -o nvt/nvt.tpr -maxwarn 2
#gmx mdrun -v -deffnm eq/eq
mpirun gmx_mpi mdrun -pin on -v -deffnm nvt/nvt -tableb /home/aswadurk/CG_test_whole/req_files/angle5_a0.xvg

# NPT

mkdir -p npt
gmx grompp -f /home/aswadurk/CG_test_whole/req_files/mdp_files/npt.mdp -p topol.top -c nvt/nvt.gro -r sol/sol.gro -o npt/npt.tpr -maxwarn 2
#gmx mdrun -v -deffnm eq/eq
mpirun gmx_mpi mdrun -pin on -v -deffnm npt/npt -tableb /home/aswadurk/CG_test_whole/req_files/angle5_a0.xvg

# MD

mkdir -p md
gmx grompp -f /home/aswadurk/CG_test_whole/req_files/mdp_files/md.mdp -p topol.top -c npt/npt.gro -o md/md.tpr -maxwarn 2
#gmx mdrun -v -deffnm md_1/md -tableb angle5_a0.xvg
mpirun gmx_mpi mdrun -pin on -v -deffnm md/md -tableb /home/aswadurk/CG_test_whole/req_files/angle5_a0.xvg

/home/aswadurk/miniconda3/bin/python /home/aswadurk/CG_test_whole/part_CG_SA_prep.py

Best,
Anand Wadurkar

Your system is almost certainly exploding. Follow the advice given in the image you posted: make sure that the EM stage worked OK, and run the equilibrations for longer.
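For example, a quick way to check the EM stage (file names taken from the posted script; adjust the energy term if your coarse-grained model names it differently):

# Did the minimisation converge, and does the potential energy look reasonable?
tail -n 20 em/em.log
echo "Potential" | gmx energy -f em/em.edr -o em/em_potential.xvg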

Also, remove the -maxwarn 2 options from the gmx grompp calls, unless you really know what you are doing. You are manually bypassing checks that are there to ensure that your input is not completely unsuitable.
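For example, the production-run grompp call from your script would then simply read (paths copied from the script above):

gmx grompp -f /home/aswadurk/CG_test_whole/req_files/mdp_files/md.mdp -p topol.top -c npt/npt.gro -o md/md.tpr

If grompp then stops with warnings, it is better to understand and fix them than to silence them.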